TSS Article: MDA: Nice idea, shame about the...

  1. Dan Haywood has written an in-depth critique on MDA. He looks at the goals of MDA, the parts and pieces, what is right about MDA, and the various issues. He ends up thinking that MDA isn't a total write-off: "What MDA has contributed then is the definition of a problem".

    Concluding thoughts
    I hope I've at least triggered some scepticism about whether MDA can work as described by the OMG. But I'm not the only sceptic. The OMG are very vocal about vendor buy-in from the likes of IBM, HP, Sun and Borland, claiming at the same time that, yes, MDA is the end of programming as we know it. But those vendors are clearly hedging their bets. For example, witness IBM's huge investment in "traditional" IDEs like Eclipse, not to mention its investments in aspect-oriented programming. Indeed, in March 2004 at the AOSD conference IBM's CTO Daniel Sabbah stated "AOP is vital for our survival". That's a very bold statement. And consider the millions of VB programmers that Sun is after with its Project Rave: Sun isn't trying to get them to move to Java by offering them an MDA solution. As for Microsoft? Noticeable by its absence.

    But even though I'm sceptical of MDA as described, I do think the problem that the OMG set itself when it came up with the MDA ideas is one worth solving. And that may be its biggest contribution, providing a focus for the use of powerful new technologies such as AOP. But the OMG's own solution to that problem: not for me, thanks.
    Read MDA: Nice idea, shame about the...

    Threaded Messages (46)

  2. MDA Journal: The MDA Marketing Message and the MDA Reality
    http://www.bptrends.com/

    This article also has similar opinions and seems to indicate that the OMG is aware of the hype.
    100% code generation might not really happen. It might not even be needed. The primary objective seems to be the focus on modeling and transformations. It seems the OMG is on the right track with these standardization efforts, even though the tools are not yet up to the mark.

    If you forget the 100% code generation part, then issues like MOF-compliant domain-specific languages make sense.
  3. It's important to appreciate that the generation of source code (whether automated or by hand, and in whichever language) is NOT the prime objective of software engineering projects (unless it is specifically a new language definition project). The objective (even though it differs for each project effort) is to create and deploy a system that serves a certain capability/purpose.

    It is very disappointing to note the extremely obtuse focus on source code (in specific languages) in this and many other fora. A formal (programming) language's purpose is to express an idea/concept, essentially a model specification in itself. As a matter of fact, modeling languages like UML (NOT MOF) are rarely even at a different level of abstraction than the "programming" languages. Why is it so difficult to view the graphical/textual "expression" as capturing anything different from what Java/C++ express? I doubt that the compactness or even the expressiveness of the language (whether graphical or textual) necessarily changes anything. I, for one, regard a fully qualified UML model (maybe in some cases with some extensions) and any given language (Java/C#) as being on the same plane.

    And as for transformation... well, one way I look at the PIM is that it's a formal expression of the requirements/capabilities of the system, and the PSM is the expression of those requirements in the context of the deployment constraints of the system's environment.

    Regarding the "elaborationist" view: it's a fairly shallow appreciation, simply reducing the donkey work of writing braindead portions of the code, but pretty much in the same line of thinking; i.e. the sole objective being to create source code rather than a system.

    The "transformationist" view (or whatever it's called nowadays), by contrast, is a simple matter of mating the requirements to the constraints.

    And no, PSMs don't need to refer to operating systems specifically. From what I can understand, a PSM could potentially target any deployment scenario. And yes, given that it's so fashionable to use design patterns, that's an example in itself... for instance, take a design pattern as a PIM and transform it into a PSM, say for the web (in the form of JSPs) or web + app server (e.g. JSPs and EJBs), etc.

    Yes, the tools as of today are probably not that mature, but keep the faith. Don't dismiss MDA, because it's what you do anyway.
  4. Source code is NOT the software

    It’s important to appreciate that the generation of source code (whether automated or by hand, and in whichever language) is NOT the prime objective of software engineering projects. The objective (even though differing for each project effort) is to create and deploy a system that serves a certain capability/purpose.
    Wow. Well put Ajay. I wish more people realized this.
    Yes, the tools as of today are probably not that mature, but keep the faith. Don't dismiss MDA, because it's what you do anyway.
    Again, great post Ajay. So many of the MDA doubters forget or don't realize that MDA is really an attempt at a formalization and automation of what we as developers do (consciously or unconsciously) in the development life cycle. Why not enlist the help of the computer to assist the process?

    And finally, what is this obsession with synchronizing changes to the code with the model?!? The model and the code should be at different levels of abstraction anyway. Otherwise the model is just a visualization of the code. Do we make changes to bytecode when there is a project change request? Do we touch the HotSpot-generated machine code? If changes are required, one should make them as high in the food chain as possible. That is, if 80% of your code is generated from models, then changes at that abstraction level should be made in the model and the code should be regenerated (the tool should know enough not to change the 20% that you had to code by hand)! Changes that affect your hand-coded 20% should be made at the code level. And your models will remain unaffected.
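    One common way tools achieve that 80/20 split is the "generation gap" pattern: the generator owns an abstract base class it can overwrite at will, while hand-written code lives in a subclass that is generated once as a stub and never touched again. A minimal sketch in Java (the class names are hypothetical, not from any particular MDA tool):

```java
// Regenerated from the model on every change -- never edit by hand.
abstract class CustomerBase {
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    // Hook the generator emits for hand-written business logic.
    public abstract boolean isValid();
}

// Generated once as an empty stub, then owned by the developer;
// regeneration leaves this file untouched.
class Customer extends CustomerBase {
    public boolean isValid() {
        return getName() != null && getName().length() > 0;
    }
}
```

    Because all hand-written code sits below the "gap", the base class can be regenerated freely without the tool having to merge anything.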

    Current tool quality and a lack of understanding of forward engineering may make this type of development harder than it should be. But IMHO we should all be charging hard into this way of doing things. Anyway, just my 2 cents.

    Again, Ajay excellent post. I could not have said it better myself.
  5. I've been investigating the introduction of MDA into some organisations that I consult for. The main reservations that I have encountered (and hence why they are not doing MDA yet) are not related in any way to the technology of MDA, but more to the associated costs and political issues...

    - Lack of modelling skills. Many of the organisations that I work with have excellent teams of coders but very few skilled modellers who are capable with UML. When you consider the completeness of UML models required to make MDA work, it is very unlikely that these organisations could support the investment in training/additional staff required while remaining competitive in today's very tight economy.

    - The need to write custom transformations. Although we are moving closer to architecture and middleware being a commodity there is still much innovation, competing approaches and minor differences between products and frameworks.
    The organisations I have been involved with are generally very reluctant to get involved with the effort of writing and maintaining their own transformations - due to the new skills required and the need to maintain skilled staff in this area. However, the standard transformations provided with many MDA tools seem not to be particularly useful in all but the simplest cases and writing transformations is still a very skilled task.

    - Loss of previous investment. Very few companies (especially those in the Java space) seem to be doing total 'green-field' development nowadays. Most have at least a couple of years of experience under their belt. This manifests itself as existing components, libraries, object models, frameworks and so on. To bring MDA into this environment would require extensive custom transformations, remodelling much that has already been written and proven, or scrapping everything and starting from scratch. All of these are very difficult to sell to management.


    So what's the solution? MDA is only going to get better and better, and hopefully MDA tools will become more advanced and support a wider range of standard transformations onto different blends of commoditised architecture (in my case I would love to see Tapestry, Spring & Hibernate). Also, easier ways to specify transformations will help significantly. However, the issues of modelling skills and loss of previous investment are so huge that it would be a very brave company that jumped into this level of investment, and a very brave consultant who recommended they do so! That said, I'm starting to use MDA on a couple of personal projects, and perhaps if many others do the same we may see 'backdoor' introduction of MDA via people who have attained the skills independently - and can bring their own transformations to the party.
  6. MDA and Refactoring

    One of the things that I like to do regularly with code I develop is to refactor: nothing major, just renaming a few variables and methods, and inlining and extracting variables and methods. I was wondering if existing MDA tools would handle such a task. The ones I have tried so far do not lend themselves well to that concept, and changing the model involves me remembering to change the code.
  7. One of the things that I like to do regularly with code I develop is to refactor: nothing major, just renaming a few variables and methods, and inlining and extracting variables and methods. I was wondering if existing MDA tools would handle such a task. The ones I have tried so far do not lend themselves well to that concept, and changing the model involves me remembering to change the code.
    You're comparing dissimilar things. You admit that you regularly refactor source code, but that's because (even a small block of) source code is a tangle of mixed concerns. A normalized information model, arrived at with a methodology such as Shlaer-Mellor, has fine-grained classes that don't need to be reshuffled as often as source code refactoring is usually done.

    Refactoring is usually a reaction to feature change. Hand maintenance can't respond as fast to feature flux as MDA can. Refining a model takes less effort than refining hand code, and TMC's maintenance case study agreed. That's because a model is more abstract than hand code.
  8. Refactoring is usually a reaction to feature change. Hand maintenance can't respond as fast to feature flux as MDA can. Refining a model takes less effort than refining hand code, and TMC's maintenance case study agreed. That's because a model is more abstract than hand code.
    In spite of being an MDA guy (the father of AndroMDA, in fact), I cannot agree here. Refactoring a model and regenerating the source code is still much more difficult for today's code generators than for an advanced IDE like Eclipse. Imagine renaming a class, for example. If you regenerate, most code generators (including AndroMDA) create source code for a new class and leave the old class alone. But if the generator renamed the old generated class(es) to the new one(s), the old hand-written code that uses the generated class(es) would break and would have to be refactored, too.

    Renaming a class is only the simplest case. Imagine a "move method" refactoring: The generated method(s) must move, the hand-written implementation must move, too. Hand-written code must be updated by hand.

    Refactoring at the model level is non-trivial - we're still trying to make it work. In the vision document for the AndroMDA project, we describe the idea that the code generator runs inside Eclipse and tells Eclipse to refactor the hand-written code automatically after the generator has modified the generated code.

    Cheers...
    Matthias

    P.S.: Pieter van Gorp at the University of Antwerp is running a research project on that subject. Ask him for more info.
  9. Refactoring a model and regenerating the source code is still much more difficult for today's code generators than for an advanced IDE like Eclipse.
    You seem to be disputing TheMiddlewareCompany's maintenance shootout.
  10. Refactoring is different from general application maintenance. The productivity shootout focused on general maintenance, meaning enhancements (entirely new features) and in some cases, redesign.

    Refactoring, on the other hand, makes a code base maintenance-ready by improving the design, without adding or taking away features. If we define refactoring in an MDA environment to mean automated improvement to the generated code only, then yes, MDA dramatically outshines hand coding, and the study certainly showed this. You'll note how the ServiceLocator pattern just sort of showed up when they regenerated code with a newer version of OptimalJ.

    But making a model-driven environment deal with refactoring the entire code base, including hand-coded (that is, non-modeled) source, is a more difficult problem.
  11. MDA seems to be promising, but how do we migrate our existing applications to MDA? Is there an easy way?

    We use our homegrown framework, which is based on the MVC pattern. There are 11 J2EE applications based on this framework, and we develop about 2 J2EE applications per year. The current 11 J2EE applications are mission critical and have been proven over about 1 to 4 years. We were investigating OptimalJ for our MDA initiative, but we were told that we have to start by creating models in the "Domain model" (OptimalJ terminology). This means re-implementing our requirements analysis there. We can re-use some of our business tier, but we have to identify these classes manually, because we were not consistent at times and in places. This process seems to be time-consuming.

    From the organization's point of view, we would like all our J2EE applications on one platform or another. This helps with skillset management.

    I am sure you folks will have come across situations like this. If so, how have you approached them?

    Responses are highly appreciated.
    Mahendra
  12. Migration to MDA

    Not sure where this stands, but most of the MDA modeling tools support XMI. Further, you can check out my white paper at http://www.iasahome.org/iasaweb/appmanager/home/content?_nfpb=true&_pageLabel=content_articles_page&content_articles_portletid=36301&content_articles_portletchannel=11 - Rajiv Parikh
  13. Personally, since we started using AOP for our base architecture we have not found any need to generate any classes whatsoever. We did generate some descriptors, but after moving to WebWork2 those are gone too. So right now we don't use any generation at all. All those needs, in our case, have been made obsolete by the introduction of aspects.
  14. So were you using MDA (or another form of generation)? If so, what were you using? How has AOP removed your dependency on generation? They don't seem to be competing technologies. In fact, they seem entirely complementary: you could use MDA to generate a static model and then impose custom functionality using AOP, which could solve the traditional problem of code generation, overwriting custom code.

    It'd be very interesting to hear more of your use of AOP in this regard.
  15. So were you using MDA (or another form of generation)? If so, what were you using? How has AOP removed your dependency on generation? They don't seem to be competing technologies. In fact, they seem entirely complementary: you could use MDA to generate a static model and then impose custom functionality using AOP, which could solve the traditional problem of code generation, overwriting custom code. It'd be very interesting to hear more of your use of AOP in this regard.
    I agree. I'm currently busy transforming OCL into other languages; plain Java code as well as AOP aspects come to mind. The goal is to have validation routines generated from the PIM, and to integrate the generated code with a project. I've just started, but so far it seems a very elegant solution, and also very complementary to MDA with UML.


    -- Wouter.
  16. I agree. I'm currently busy transforming OCL into other languages; plain Java code as well as AOP aspects come to mind. The goal is to have validation routines generated from the PIM, and to integrate the generated code with a project. I've just started, but so far it seems a very elegant solution, and also very complementary to MDA with UML.
    How is this done? Are you generating code yourself?

    Ours is a domain related to insurance, credit cards etc. It looks like going for a domain-specific language based on MOF would generate much of the boilerplate code. The other aspect is the generation of code like validation routines etc.
  17. How is this done? Are you generating code yourself? Ours is a domain related to insurance, credit cards etc. It looks like going for a domain-specific language based on MOF would generate much of the boilerplate code. The other aspect is the generation of code like validation routines etc.
    Well, we're currently integrating another open-source tool called OCLTF, which was developed some time ago by Chad Brandon, one of the main committers on the AndroMDA team. Basically it uses the Visitor pattern to provide a callback for each individual token in the grammar, not unlike SAX for example.

    It is not a trivial problem, translating one grammar into another, but this tool is both elegant and pragmatic in its approach. I was able to get started in an evening or two.

    http://ocltf.sourceforge.net

    a short tutorial is here:

    http://ocltf.sourceforge.net/ocltf-translation-libraries/developing.html

    The tutorial helps you test your translations in a sandbox environment.

    The author wrote example translation libraries that transform into EJB-QL and Hibernate-QL.

    I have found OCL to be a very simple and natural language, and a good extension to UML. Again, with QVT around the corner its importance will become even more apparent. But as always, it's up to you to decide whether you like it or not; if you have a better or easier way to get things done, you will probably decide not to use OCL.

    For people not familiar with OCL (Object Constraint Language), you need to know this about it:

    1) OCL describes pre-conditions, post-conditions and invariants (conditions that must always evaluate to "true")

    2) you can navigate through your objects and associations using the dot "." operator

    3) special features are accessed using the arrow "->" operator; for example, "a->isEmpty()" is the OCL counterpart of "a == null" or "a.isEmpty()" (depending on whether "a" is a java.util.Collection or not)

    You can add OCL to pretty much any model element in UML, but most people seem to use it only on classes. Personally, I also use it on operations and in activity graphs (transitions, action states, ...).
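    To make the earlier point about generating validation routines from OCL concrete, here is a sketch of what a generator might emit for a simple invariant; the constraint and the Java mapping are my own illustration, not OCLTF's actual output:

```java
// Hypothetical model constraint:
//   context Account inv: balance >= 0
class Account {
    private long balance;

    public long getBalance() { return balance; }
    public void setBalance(long balance) { this.balance = balance; }

    // Validation routine a generator could derive from the invariant.
    public boolean checkInvariants() {
        return getBalance() >= 0;
    }
}
```

    A real translation library would handle navigation and collection operations too; the invariant case is just the simplest mapping.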

    Anyway, all this is going to be integrated into AndroMDA, probably at the end of this week if all goes well; our idea is to have it ready for AndroMDA 3.0M2.


    -- Wouter
  18. So were you using MDA (or another form of generation)? If so, what were you using?
    I was using XDoclet with EJB, to generate all the usual stuff.
    How has AOP removed your dependency on generation? They don't seem to be competing technologies.
    By allowing me to use designs and architectures that don't "require" generating loads of extra code, like DTOs and such.

    In a sense you are right: they are not competing technologies. But in my case all the problems with "usual" designs, which involved generating lots of code for this and that, disappeared. In this sense code generation became obsolete, at least for the stuff I was doing.
    In fact, they seem entirely complementary: you could use MDA to generate a static model and then impose custom functionality using AOP, which could solve the traditional problem of code generation, overwriting custom code. It'd be very interesting to hear more of your use of AOP in this regard.
    Sure, that'd work. Essentially you could generate a model which doesn't have any framework code at all, and then apply that through AOP instead of code generation. What you'd generate, then, are pointcuts to apply aspects to code, instead of generating the actual code. The aspects would be handwritten.
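    A toy illustration of that idea, generating pointcuts rather than code: the generator walks the model's class list and emits AspectJ source that binds a hand-written aspect to the modelled setters (the aspect name, template and class names below are invented for the example):

```java
import java.util.Arrays;
import java.util.List;

// Instead of generating framework code into each model class, emit
// AspectJ pointcut source that applies a hand-written aspect to them.
class PointcutGenerator {
    static String generate(String aspectName, List<String> modelClasses) {
        StringBuilder src = new StringBuilder();
        src.append("aspect ").append(aspectName).append(" {\n");
        src.append("    pointcut stateChange() :\n");
        for (int i = 0; i < modelClasses.size(); i++) {
            src.append("        execution(* ")
               .append(modelClasses.get(i)).append(".set*(..))")
               .append(i < modelClasses.size() - 1 ? " ||\n" : ";\n");
        }
        src.append("}\n");
        return src.toString();
    }
}
```

    Run against a model containing Customer and Order, this yields a pointcut matching every setter on both classes; the advice itself stays hand-written, exactly as described above.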
  19. generating loads of extra code, like DTO's and such.

    How would aspects change the use of DTOs? How does an object get saved without an explicit definition of the object, or reflection?
  20. > generating loads of extra code, like DTO's and such. How would aspects change the use of DTOs? How does an object get saved without an explicit definition of the object, or reflection?
    I can use the object model directly on the client, along with some other aspects that allow individual properties of the object model to be loaded in an optimized way that avoids the use of DTOs, or at least the way I used DTOs.

    I used DTOs as a way to optimize loading (= only getting subsets of objects, for client display for example) and state transfer from client to server. Both of these are handled by aspects in my case, so I don't need DTOs.
  21. > I can use the object model directly on the client, along with some other aspects that allow individual properties of the object model to be loaded in an optimized way that avoids the use of DTOs, or at least the way I used DTOs.

    Can you give a specific example? I can't make sense of this technically. If you are moving objects to a database or across a network then you need to know the structure. How do aspects prevent this? Aspects are still on methods, so the object needs to be present and some sort of commonality of attributes must still be assumed.
  22. Can you give a specific example? I can't make sense of this technically. If you are moving objects to a database or across a network then you need to know the structure. How do aspects prevent this? Aspects are still on methods, so the object needs to be present and some sort of commonality of attributes must still be assumed.
    Sorry, I should have been more explicit. I *do* have POJOs with get/set methods, and this is the structure used both on the server and the client, and everything in between. I.e. what I don't have is one "component" on the server (e.g. an EntityBean) and then DTOs/VOs to transfer state between layers.

    With regard to MDA, I suppose these POJOs could easily be created in UML if you wanted to. They're basically just field/get/set tuples. Not sure what it'd buy you, though, apart from a fancier way of doing "add 'foo' property to this class".
  23. I have some important concerns regarding 'real-world usage' of MDA. If anyone has been using it extensively, especially in large-scale applications, please chime in:

    1. Since one generates code from the model, how does one handle 'change requests'? Does one modify the model and re-gen the code?
    2. Since there is a huge library of existing code, does one need to 'reverse engineer' the code to create the model?
    3. Is it possible for the model to go out of sync with the code? If so, what does MDA provide to address this?
    4. Why do we need to keep the model in sync with the code? Is not the code itself the 'high-level model'?
    5. The OMG specifies the 'translationist' model (as one of the two). This means defining the complete application in the model, and generating the application/code. Is this possible, or even meaningful? Anybody with real-world examples?
    6. Since iterative processes are in vogue, how does one use MDA in this context? Does one modify the model or change the code? There seems to be an over-emphasis on creating and maintaining the 'model'.
    7. Is there a way to 'test' the models for correctness and validity? I am especially interested in this because ultimately, if we do not have tests, the 'models' are moot. Are there tools/vendors who are doing this?
    8. The OMG claims 'rapid deployment and delivery through transformation and code generation'. Has anyone had experience doing this with non-MDA tools? If not, why do we think this would work?
    9. Most importantly, if an MDA-driven project fails, or if a company plans to move from MDA to another 'next-gen' process (a few years from now), can it move its codebase/models/whatever to the next big thing? To put it another way, how tied down to MDA is my application? Is it independent (of the process - which is a good thing)?
  24. I will try to answer your questions as best as possible, based on my 'real-world' attempts at using MDA...
    1. Since one generates code from the model, how does one handle 'change requests'? Does one modify the model and re-gen the code?
    Yes, the principle is that you always change the model and regenerate. In the 100% generation approach this is no problem. In cases where implementation detail has been added to the code, you get some interesting problems to overcome. Some tools ignore this altogether and use a one-off generation, then a change-the-code approach; others advise you to write your code in subclasses of the generated classes; and others try to insert the added code back into the newly generated sources. This is one area where MDA has a long way to go, IMHO.
    2. Since there is a huge library of existing code, does one need to 'reverse engineer' the code to create the model?
    If the code contains business model classes and so on that you want to use or change, then yes, you must reverse engineer so that changes can be regenerated. If you are talking about libraries of frameworks, infrastructure and so on, then these don't need to be in the model, but you will need to develop custom transformations to generate code that depends on them.
    3. Is it possible for the model to go out of sync with the code? If so, what does MDA provide to address this?
    Yes, they can go out of sync if developers make changes to relationships of key model components directly in the code. Most MDA tools don't take account of this at all, and either go for one-off generation or rely on developer discipline to make the necessary changes at the model level.
    4. Why do we need to keep the model in sync with the code? Is not the code itself the 'high-level model'?
    This is certainly a strong argument: build the model as it stands, test the model, then do a one-off transformation and from that point on work just with the code. However, this approach only works if you never want to transform the model to a different architecture. As the basis of MDA is that the business model is more stable than the architecture, not keeping the model in sync offers little benefit above what most modern CASE tools already provide.
    5. The OMG specifies the 'translationist' model (as one of the two). This means defining the complete application in the model, and generating the application/code. Is this possible, or even meaningful? Anybody with real-world examples?
    To actually achieve this requires that you produce incredibly complete, detailed and accurate UML models. I class myself as a strong modeller, but I find it challenging to capture such a complete model given the UML tools and information currently available. Long term, as tools and experience improve, I think this may be feasible; but as I said in my previous post, how many companies are going to dump their entire programming team and retrain them as object modelling and UML experts in the near future?
    6. Since iterative processes are in vogue, how does one use MDA in this context? Does one modify the model or change the code? There seems to be an over-emphasis on creating and maintaining the 'model'.
    I think this is an area that MDA has yet to fully address, and it will probably be something we discover as more and more projects are done. In theory, MDA should support an iterative modelling approach, but it is not quite there yet.

    Yes, MDA does have a strong emphasis on creating and maintaining the model. That's the whole point! The domain knowledge captured in the model is more important than the code, and the architecture and frameworks that the code is built upon.
    7. Is there a way to 'test' the models for correctness and validity? I am especially interested in this because ultimately, if we do not have tests, the 'models' are moot. Are there tools/vendors who are doing this?
    Most existing modelling tools have the ability to check models for errors and completeness. Again, I think as MDA and modelling tools continue to improve, this will become a significant area of focus. In particular, it should allow the delaying of transformations and code generation to a fairly late stage in the development process, adding even more value to the modelling stage of the project.
    8. The OMG claims 'rapid deployment and delivery through transformation and code generation'. Has anyone had experience doing this with non-MDA tools? If not, why do we think this would work?
    As MDA stands at the current time, I would say it is close to delivering on this promise PROVIDED you target a platform and architecture for which there is a well-proven and comprehensive transformation provided by the tool vendor. Where the thing breaks down is if you have to develop transformations in parallel with the model, so you actually have two things to build and test.
    9. Most importantly, if an MDA-driven project fails, or if a company plans to move from MDA to another 'next-gen' process (a few years from now), can it move its codebase/models/whatever to the next big thing? To put it another way, how tied down to MDA is my application? Is it independent (of the process - which is a good thing)?
    Your model is recorded using UML and possibly other standard languages/notations, and your generated code will be in a target language such as Java/C#/C++. Therefore, why should failure or movement be any more painful than it is in a non-MDA world? In fact, MDA should be an advantage, as it allows you to carry your models forward and then to write new transformations to target new languages, patterns, architectures and so on.

    I personally would say that 'real-world usage' of MDA is at a sort of half-and-half stage at the moment. If you are confident about the modelling skills and discipline of your team, then you should find it successful. If there are any doubts, then a few experimental and learning projects are probably essential before putting it on anything mission critical.
  25. First of all, the article writes off Microsoft as a participant in the MDA space. Not true: their "Whitehorse" product is the first step towards MDA.

    Second, OMG's MDA has two flaws, both of which are easy to fix.

    1. PSMs are almost never required. Shy away from them. Your code generator tool should be the PSM!

    2. Do NOT use OCL. Use plain Java or C#.

    Also, you should use MDA with respect to a domain specific reference architecture.

    Our shop uses MDA (with the above modifications) extensively, with great success: fewer bugs and MUCH better productivity. We generate Java, C#, Web Services interfaces, C++/COM components, O/R mappings, interface and test documentation, etc.

    I'll never go back.
  26. Do NOT use OCL. Use plain Java or C#.
    Could you elaborate a little more on this? I don't know many people with hands-on experience of OCL, so I hardly ever hear arguments against it. What is wrong with OCL?

    -- Wouter
  27. To the average UML modeler today, OCL seems complex. However, I think tools can ease the pain of doing detailed MDA models using OCL. I'm interested in seeing some of the things Compuware has to offer.
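    For readers who haven't used OCL: an invariant such as `context Account inv: balance >= 0` is declarative, while the "plain Java" alternative advocated above would be an ordinary checked method. A minimal sketch (the Account class here is invented purely for illustration):

```java
// Sketch: the OCL invariant "context Account inv: balance >= 0"
// expressed as plain Java, per the "skip OCL" suggestion above.
public class Account {
    private final long balance;

    public Account(long balance) {
        this.balance = balance;
        // Check the invariant eagerly instead of declaring it in OCL.
        if (!invariantHolds())
            throw new IllegalStateException("balance must be non-negative");
    }

    // OCL equivalent: inv: balance >= 0
    public boolean invariantHolds() {
        return balance >= 0;
    }

    public static void main(String[] args) {
        System.out.println(new Account(100).invariantHolds());
    }
}
```

    The trade-off is the one debated in this thread: the Java version is imperative and familiar, but the constraint is now buried in code rather than attached to the model.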
  28. Han,
    probably off topic, but what tools are you using to achieve this?
  29. Han, probably off topic, but what tools are you using to achieve this?
    An in-house tool. It took us two months to craft and has saved us an enormous amount of time and money.

    The process is quite simple:

    1. Export models to XMI
    2. Generate code using a code generation framework that maps from an XMI DOM to a Code DOM, which in turn drives Java, C++, .NET, DDL, RTF, etc. "artifact" generators. That's all you need to do structural stuff in a portable way.

    With this framework, it typically takes us one or two days to write a domain-specific generator. We also make the generator configurable so we can reuse it in other projects.
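    The XMI-to-code step of that pipeline might be sketched like this; the XMI fragment and class names below are deliberately simplified stand-ins for real tool output, not any particular vendor's format:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;

// Minimal sketch of step 2: walk an exported XMI DOM and emit Java
// class skeletons. Real XMI is far richer; this captures the shape only.
public class XmiToJava {
    static final String XMI =
        "<XMI><UML.Class name='Customer'/><UML.Class name='Order'/></XMI>";

    public static String generate(String xmi) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new ByteArrayInputStream(xmi.getBytes("UTF-8")));
        NodeList classes = doc.getElementsByTagName("UML.Class");
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < classes.getLength(); i++) {
            String name = ((Element) classes.item(i)).getAttribute("name");
            out.append("public class ").append(name).append(" { }\n");
        }
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.print(generate(XMI));
    }
}
```

    A production framework would build an intermediate Code DOM instead of concatenating strings, which is what makes the .NET/DDL/RTF back ends possible from the same front end.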

    Be pragmatic. Throw away the MDA books. Throw away OCL - it's a functional language in an imperative world.

    Learn how to generate the structural stuff first. This includes classes, database schemas, O/R mapping specifications, configuration files, layer/factory/etc pattern constituents, constants and so on. Structural stuff quite often comprises > 50% of the application and is pretty easy to do, even with roundtrip support. Behavioural stuff is *much* harder, but doable in many cases. The trick, as I've mentioned before, is to generate code into a well-defined reference architecture.

    Learn how to use UML profiles, i.e. stereotypes and tagged values. Extremely powerful. For instance, for one generator we have, we tag some classes as "corba" and "ejb", possibly both. The generator creates the appropriate factory classes and the necessary framework based on these values. The result is correct, consistent and comprehensive.
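    A toy sketch of that stereotype-driven dispatch; the class and artifact names are invented for illustration, not taken from any real generator:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch: a model class carries stereotypes ("ejb", "corba", possibly both);
// the generator decides which factory artifacts to emit from those tags.
public class StereotypeGenerator {
    public static List<String> factoriesFor(String className, Set<String> stereotypes) {
        List<String> artifacts = new ArrayList<>();
        if (stereotypes.contains("ejb"))
            artifacts.add(className + "EjbFactory");
        if (stereotypes.contains("corba"))
            artifacts.add(className + "CorbaFactory");
        return artifacts;
    }

    public static void main(String[] args) {
        // A class tagged with both stereotypes yields both factories.
        System.out.println(factoriesFor("Customer",
            new HashSet<>(Arrays.asList("ejb", "corba"))));
    }
}
```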

    We have recently found that inversion-of-control containers are perfect frameworks for MDA and they are easy to migrate to many platforms.
  30. What MDA is really about...[ Go to top ]

    Perhaps we should call this approach "lowerCASE MDA." It captures the spirit of MDA, even if it does not follow the letter of the spec.

    The simple models Han uses are more abstract than the code he intends to generate. He achieves separation of business application design and technology design. The business goes in the model and the technology (architectural decisions) goes in the code generators. Han raises the level of abstraction and gains productivity.

    You should know, before I continue, that I work for Compuware, and in fact I work on OptimalJ, and ("but wait, there's more!"), I work on some of those specs in the OMG. I think a few points need to be clarified regarding MDA.

    Why PIMs? The better question to ask is, Why transform models into other models? Let me explain. A Platform Independent Model is a Platform Independent Model when it is the source of information for the creation of a more specific model, that is, a Platform Specific Model (PSM). So why should we generate another model? Why not go straight to code?

    The act of transforming adds refinements to the information contained in the PIM with the specific purpose of adding some of the details of implementation choices. Consider the classic example of POJOs atop a relational database. I need information about how to take a set of modeled classes and map them to a relational database, and I need to map them to some implementation language.

    In the RDBMS mapping, I need to worry about details like indexes and primary key constraints and attribute lengths. Information you'd normally put in a physical data model.

    When I map the simple class model to POJOs in Hibernate (for example), I now need to think about things like change listening and collection management. I also need to write HQL and create mapping files.

    If I generate straight from the PIM, all of the information I need must be there, or else I have to add it in with tags. If the information is not in the model, then the decision must go in the code generators. The burden must fall somewhere.

    Looking at the first choice, you wind up with a PIM loaded down with tags and environment specific information. This results in a model not independent of a class of platforms the way it ought to be, but specific to all of the target platforms. In other words a tangled mess of angle-brackets.

    At the other extreme, you have code generators that make brilliant decisions about how to handle some extremely terse model information. Personally, I might initially find this more fun (I get to write really slick code), but eventually, the effort required to write code generators becomes burdensome.

    So let's go back to the example I was setting up. You have two models that are two separate, yet related, refinements of a single PIM. Some of the platform specific decisions can be made in the transformations themselves. For example, the transformation can make default decisions about how to map between my domain class and my DBMS table. The models I generate code from are cleaner now, and contain more specific information for the code generators to work with. Even better, I can edit each specific model to override the default refinements made by the transformation. My code generators can add in the gory details of logging code and mapping file generation, but the decision of whether to use table-per-class or table-per-concrete-class mapping is already spelled out in the model and plain to see.

    In large scale development efforts, making incremental refinements like this can be very important. I'm not saying Han should go out and buy OptimalJ (although I naturally wouldn't mind in the slightest), nor am I saying he should implement model transformations in his "lowerCASE MDA" approach. He gets enough out of what he's got and I think that's great. But there is a use for model transformation, and tools that help you implement this can be that next step beyond simple generators to generative, model-aware systems of development, deployment and feedback.

    So what about keeping the code (a model itself, incidentally) synchronized with the model(s)? I'm of the opinion that AOP has a great deal to contribute here. Imagine that you simply refine the generated code by adding in aspects. So instead of "your code here" blocks, you get a signature through which you may offer advice. Very powerful.
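    A minimal plain-Java approximation of that idea, using a function-valued hook rather than a real AOP weaver; all names below are hypothetical:

```java
import java.util.function.UnaryOperator;

// Sketch: generated code exposes a stable signature ("join point") that
// hand-written refinements can advise, instead of a "your code here" block.
public class GeneratedCustomerService {
    // Generated: default advice is the identity (no refinement).
    private static UnaryOperator<String> saveAdvice = UnaryOperator.identity();

    // Hand-written code registers advice against the generated signature.
    public static void adviseSave(UnaryOperator<String> advice) {
        saveAdvice = advice;
    }

    // Generated behaviour, refined by whatever advice was supplied.
    public static String save(String customer) {
        return saveAdvice.apply("saved:" + customer);
    }

    public static void main(String[] args) {
        adviseSave(s -> s + " [audited]");   // hand-written refinement
        System.out.println(save("Customer#1"));
    }
}
```

    With a real aspect weaver the generated file would never need the explicit hook; the advice would attach to the signature from outside, which is exactly why AOP is attractive here.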

    But if you're stuck generating to something just a bit more coupled, then the burden is on the tool providers to solve the problem of understanding the code well enough to reverse the abstraction process (think precompilation environments). OptimalJ doesn't do this yet. We took a much simpler approach: guard and free blocks; one of our many nods to pragmatism.

    I've given you what the author called the elaborationist point of view. What about the translationist point of view? I honestly think they are aimed at a harder problem. If generating code means manufacturing hardware, you actually do need to be able to simulate and test models. It becomes prohibitively expensive not to do so.

    I don't think this should raise alarm bells in regard to MDA, though. I'm of the opinion that this should be no more worrying than the fact that trucks use different engines than cars. Both of them benefit from the principles of internal combustion, paved roads, and reasonably consistent traffic laws. If I have a tool that I use to model and simulate the control software of the F22 Raptor, do I really care that I can't export the models and import them into ArcStyler? I'm going to use products like OptimalJ or ArcStyler to build business applications and I'm going to use a tool that specifically targets real time or embedded industry segments if that's the sort of work I'm doing. Tools aimed at a specific market will provide things over and above the standard that wouldn't make sense or would have less value in some other industry segment.

    What about interchange? Well... The specifications aren't complete, some don't even exist in draft form. The existing standards allow tool vendors, like Compuware, to give you working MDA systems. But this means we've got to fill in the blanks. Implementing model transformation or a templating language for code generation are just two examples of this. As new standards come to completion, it will be the customer's job to demand adherence to these standards.

    Until then, don't reject the ideas of MDA out of hand. And because I've written code for OptimalJ, I've got to say, don't reject the current crop of tools out of hand either. Creating your own code generation system from scratch can be difficult. Have a look at Code Generation In Action, measure the work involved and decide for yourself if it would be less expensive to buy a COTS code generation / model transformation system like OptimalJ.
  31. What MDA is really about...[ Go to top ]

    First of all, great post Michael.

    Quick question, Mr. Murphree: does OptimalJ have a Transformation Definition Language that is exposed to the tool user, so that the developer can develop her own homegrown model transformations? If you have this functionality, is it available in all editions?
  32. What MDA is really about...[ Go to top ]

    OptimalJ doesn't have a language for transformation, unless you count Java. It does, however, have a framework you can use to create transformation definitions. Internally we call it the incremental copier framework. In our documentation it is called a technology pattern or technology transformation.

    The framework itself allows you to write reasonably simple Java methods that act as declarations of rules. These rules get picked up by the framework and invoked through reflection.

    Transformation in OptimalJ is a two phase process, hence the term incremental. First, the mapping rules are written out to the model. For example, you write a method that declares that when the copier framework encounters a domain class in the PIM, it should map to an EJB entity component in the PSM. OptimalJ finds this declaration based on the signatures of source and target model element classes, and writes down (in the PSM) that there is a mapping relationship between domain class Customer and entity component Customer.

    Then phase two kicks in. We refer to this as the copying of structural features. Setting the name of the model element, setting properties specific to Customer... etc.

    User adjustments to the PSM are protected through the use of a Regenerate property. By adjusting this property, the user has additional control over which properties of any given model element may be set or altered (including collections of child elements) when the transformation is triggered again.
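    OptimalJ's internals aren't public beyond what's described here, but the reflection-based rule discovery could be approximated like this; every name below is hypothetical and the framework is reduced to its bare mechanism:

```java
import java.lang.reflect.Method;

// Sketch: a mapping rule is just a method whose single parameter type
// matches the source model element; the framework finds it by reflection.
public class CopierFramework {
    static class DomainClass { String name; DomainClass(String n) { name = n; } }
    static class EntityComponent { String name; EntityComponent(String n) { name = n; } }

    // Rule declaration: a PIM domain class maps to a PSM entity component.
    public static EntityComponent mapDomainClass(DomainClass source) {
        return new EntityComponent(source.name);
    }

    // Find and invoke the rule whose parameter matches the given element.
    public static Object transform(Object element) throws Exception {
        for (Method m : CopierFramework.class.getDeclaredMethods()) {
            Class<?>[] params = m.getParameterTypes();
            if (m.getName().startsWith("map")
                    && params.length == 1
                    && params[0].isInstance(element))
                return m.invoke(null, element);
        }
        throw new IllegalArgumentException("no rule for " + element.getClass());
    }

    public static void main(String[] args) throws Exception {
        EntityComponent psm = (EntityComponent) transform(new DomainClass("Customer"));
        System.out.println(psm.name);
    }
}
```

    The real framework adds the second, incremental phase (copying structural features and honouring the Regenerate property); this sketch only shows how declaration-by-signature can work.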

    There's more to it, of course, but those are the basic principles. Users may write their own transformations, excuse me, "technology patterns" (I think we went pattern happy when we named these things). We had to provide this because users may also create their own metamodels. OptimalJ is a full MOF model authoring system (if you get the Architect Edition). We also provide a framework for model verification (deferred model constraints).

    Compuware is actively participating in the MOF-QVT (Query/View/Transformation) effort within OMG. When a standard is settled on, we'll move to implement it. That said, many of the principles the standard requires are already found in OptimalJ.

    BTW: Because those mappings are model elements, they are available to the code generators, should that information be needed.
  33. What MDA is really about...[ Go to top ]

    If I generate straight from the PIM, all of the information I need must be there, or else I have to add it in with tags. If the information is not in the model, then the decision must go in the code generators. The burden must fall somewhere.
    This is how we do it:

    1. Any business-related tags go into the model. So do *architectural* decisions. To make this work you have to have a UML profile that understands your architectural framework (be it .NET, EJB, Corba, WebServices, whatever...). Hopefully, these profiles will be standardized someday - an activity that the OMG should consider important these days!

    Even for complex applications, we find that very few tags and stereotypes are actually needed. We do support tags at all levels, but usually only need them at package and class levels. Sometimes at the property/attribute/operation level, i.e. to mark a property as "key" or "unique" if we generate O/R mapping code. In the latter case we sometimes also mark n..* associations as "bag", "list", "set" (default) and so on. The appropriate database constraints are thus generated, along with proper collection classes and domain object navigators.

    2. Any technical decision goes into the code generator configuration. We do this configuration visually. Our most complex generator has less than 10 configuration settings.
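    The association-tag-to-collection mapping described above (with "set" as the default) might reduce to something like this; the class name and the choice of concrete collection types are invented for illustration:

```java
// Sketch: map a UML association tag ("bag", "list", "set") to the
// collection type a generator would emit, defaulting to "set".
public class CollectionTagMapper {
    public static String collectionTypeFor(String tag) {
        switch (tag == null ? "set" : tag) {
            case "bag":  return "java.util.ArrayList";    // duplicates, unordered
            case "list": return "java.util.ArrayList";    // duplicates, ordered
            case "set":  return "java.util.LinkedHashSet"; // unique elements
            default:     throw new IllegalArgumentException("unknown tag: " + tag);
        }
    }

    public static void main(String[] args) {
        // An untagged n..* association falls back to the "set" default.
        System.out.println(collectionTypeFor(null));
    }
}
```

    A real generator would also emit the matching database constraints (unique indexes for sets, ordering columns for lists) from the same tag, which is the "correct, consistent and comprehensive" property being claimed.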
  34. What MDA is really about...[ Go to top ]

    Han,

    That sounds like you have a very nice, functional, practical approach. How long did it take you to put your framework in place?

    Regards,

    Michael
  35. What MDA is really about...[ Go to top ]

    I agree with the other poster - good explanation. In fact, fire your sales people; they need to do this good a job, with this much understanding.

    So how does OptimalJ (and this question goes for other tools, so chime in) deal with the UI? I hear talk of model/domain/persistence, but not of the view. It seems odd, but I spend most of my time exposing the domain. (I know the NakedObjects people will jump in here, and I wish I could use it.) I think I have seen Struts generators. But since there is a push towards rich clients, how do I do Swing/SWT/JSF/Echo/WinForms/WebForms... ?
  36. What MDA is really about...[ Go to top ]

    Mark,

    OptimalJ generates a web tier based on Struts and the business delegate pattern. Up through OptimalJ 3.1 the UI elements were fairly abstract (naturally). By default, when we generate the presentation PSM, we create a web component model element which represents a collection of actions and JSPs.

    For domain classes in the PIM you get a "maintenance" component in the PSM from which OptimalJ generates search/browse/edit/new/associate JSPs and actions that use what we call Business Facades. Business Facades use a service locator to talk to the generated EJBs. Each explicit task in the maintenance pattern actually gets several action classes created for it (preinvoke, invoke, postinvoke).

    The downside of this is that it's mildly surprising to get so much code for a single 'thing' in the model.

    For CRUD applications, OptimalJ gives good attention to some of the details in the model. For example, when you look at the generated edit page for Customer, you automatically get links or buttons that allow you to select or create associated entities like Call.

    What's less than thrilling about this, however, is that you can't model the workflow, or pageflow you want and have it generate this. We're fixing this problem in 3.2 (the next release due out very soon). We allow you to have much greater control in the model over how your web UI comes out. Imagine a visual Struts config editor on steroids.

    Personally, I think we also ought to look into something like XUL or NakedObjects. Although if you're using NakedObjects, MDA is probably significant overkill. But if you wanted to apply the Presentation-Abstraction-Control (PAC) UI pattern like NakedObjects, only with Web technology, for example, code generation starts looking more attractive again. Hmmm... PACXUL... Sounds like a prescription drug...

    Presentation, ironically, is one of the glaring omissions of UML. I can model the structure of the classes that make up the UI, and even model how they interact. But I can't model (at least not with UML) the physical layout and navigation characteristics of a UI. At some point I hope we (the industry "we") settle on something like the Windows Diagrams from Fundamentals of Object-Oriented Design in UML. These are basically lines, boxes, text and arrows. Granted, something like PAC makes Windows Diagrams something auxiliary to your UI construction.

    Regards,
    Michael
  37. What MDA is really about...[ Go to top ]

    Mark, OptimalJ generates a web tier based on Struts
    I thought it was OptimalJ. But I am moving away from Struts and the browser as a UI where I can. And as fast as I can.
     I can model the structure of the classes that make up the UI, and even model how they interact. But I can't model (at least not with UML) the physical layout and navigation characteristics of a UI.
    And that is where I spend most of my time.
  38. http://www.jroller.com/page/pawnxing/20040512#mda_the_big_kid_in
  39. While I do not have enough information at present to pass judgement on MDA, I am a believer in program generation/transformation and do think that MDA is an idea whose time has come. Unfortunately there are a couple of misunderstandings of program generation in the article. Here is one of them

    "Consider also what it means to model in an environment where there is no platform. For example, how would you send an email? The programmer will need to define an abstraction of this email sending service first, then write a set of transformers to generate a realization of that service in the PSM and code. This seems to me a somewhat pointless reinvention."

    This is basically wrong because it ignores the fact that even if you weren't using MDA, *someone* has to have identified the abstraction of the email service. Typically this is the programmer, who then goes and codes that abstraction directly in implementation platform terms. Often this attempt to code and identify the abstraction simultaneously imposes an extra burden on the programmer, and we see the consequences in software that then has to be debugged by the end user. With MDA, and generative programming in general, the two are done separately, with the abstraction being maintained more explicitly as a model or spec. There is no "pointless reinvention", and no freebie. Someone has to do the work.
    cheers
  40. Ever Read the Java API?[ Go to top ]

    "This is basically wrong because it ignores the fact that even if you weren't using MDA *someone* has to have identified the abstraction of the email service. Typically this is the programmer who then goes and codes that abstraction directly in implementation platform terms."

    Last I heard Java had defined an API for mail. Although it's a PSM, to have to redefine this as a PIM seems to me ridiculous.

    I too find it quite ridiculous when existing APIs in our implementation platform have to be 'ported up' to the PIM just so we can have a UML diagram. I find it much simpler to reverse engineer them into a UML tool so I can deliver the pretty pictures to the architects - who don't seem to be capable of studying the Java API before redesigning a PIM.

    Also, doesn't Java provide platform-independence? I've coded on an NT4 workstation and run the same JAR (an MQSeries client) on an OS/390 (under USS), Solaris, AIX, Linux, NT, and Windows CE. I believe I can live with this level of platform independence.

    IMHO, what MDA needs to succeed is a definition of all possible abstractions (PIMs), so we don't have to reinvent the wheel. Good luck getting the OMG to reach consensus on what the ideal models are, since the world, last I recall, isn't standing still, and vendors who deliver to a platform seem to frown on the idea of platform portability. Thus, although I find the concept of MDA worthy, reality makes me skeptical of its future.
  41. Ever Read the Java API?[ Go to top ]

    "This is basically wrong because it ignores the fact that even if you weren't using MDA *someone* has to have identified the abstraction of the email service. Typically this is the programmer who then goes and codes that abstraction directly in implementation platform terms."
    Last I heard Java had defined an API for mail. Although it's a PSM, to have to redefine this as a PIM seems to me ridiculous.
    And if I want to develop in C++? Or Perl? Or for Embedded .Net? Or ...
    I too find it quite ridiculous when existing APIs in our implementation platform have to be 'ported up' to the PIM just so we can have a UML diagram. I find it much simpler to reverse engineer them into a UML tool so I can deliver the pretty pictures to the architects - who don't seem to be capable of studying the Java API before redesigning a PIM.
    And so it is. If you assume Java as your starting point. But that's just a consequence of code-centric thinking.

    Up to a point, yes. But we've seen that APIs that try to be truly platform independent end up being unwieldy and slow (e.g. Swing). There's a reason for this. The API is trying to do too much. Not only is it exposing the conceptual abstraction, it's also struggling to cover up the idiosyncrasies of the platform it is operating on, and not only that, but to unify the idiosyncrasies of several platforms. The only way to do this is to move up to a least common ancestor.

    I've coded on an NT4 workstation and run the same JAR (an MQSeries client) on an OS/390 (under USS), Solaris, AIX, Linux, NT, and Windows CE. I believe I can live with this level of platform independence.
    I think you've been fairly lucky.

    I have no idea what you mean by all possible abstractions. That to me sounds like all possible sets, or all possible programs.
    Good luck getting the OMG to reach consensus on what the ideal models are, since the world, last I recall, isn't standing still, and vendors who deliver to a platform seem to frown on the idea of platform portability. Thus, although I find the concept of MDA worthy, reality makes me skeptical of its future.
    Ah politics. That's an entirely different story. In the end MDA will probably end up being driven by politics, the way the UML standards already are. So in that sense I too am sceptical. However, a good idea will always struggle to get out from under the cover of politics.

    cheers
  42. And if I want to develop in C++? Or Perl? Or for Embedded .Net? Or ...
    Read the Java API before you reinvent the wheel. Java seems by general consensus to be the most network-aware language with the richest set of APIs. Implement in .NET, but why reinvent the wheel to define a Collections API? Ever wonder why C# is fundamentally a copy++ of Java?
    And so it is. If you assume Java as your starting point. But that's just a consequence of code-centric thinking.
    Java seems to offer the only running virtual machine across most 'platforms'. If one is coding to reality and coding network-aware programs, one should stick to this level of abstraction and earn some money rather than be thought of as a theorist/dreamer. When MDA systems have been up and running for 9 or so years and it's proven to work, I guess I'll move up a meta-level.
    Up to a point, yes. But we've seen that APIs that try to be truly platform independent end up being unwieldy and slow (e.g. Swing). There's a reason for this. The API is trying to do too much. Not only is it exposing the conceptual abstraction, it's also struggling to cover up the idiosyncrasies of the platform it is operating on, and not only that, but to unify the idiosyncrasies of several platforms. The only way to do this is to move up to a least common ancestor.
    Is this not what MDA is trying to do? The point you make (well taken) is that an abstraction that should run across all platforms (like MDA models, like Java), will never perform as well as one targeted to a platform.
    I think you've been fairly lucky.
    Not really lucky, just bright enough to code to a virtual machine that runs on most 'platforms'.
    Ah politics. That's an entirely different story. In the end MDA will probably end up being driven by politics, the way the UML standards already are. So in that sense I too am sceptical. However, a good idea will always struggle to get out from under the cover of politics.
    Business and politics seem to rule the day rather than reason, don't they?
  43. And if I want to develop in C++? Or Perl? Or for Embedded .Net? Or ...
    Read the Java API before you reinvent the wheel. Java seems by general consensus to be the most network-aware language with the richest set of APIs. Implement in .NET, but why reinvent the wheel to define a Collections API? Ever wonder why C# is fundamentally a copy++ of Java?
    I have no problem with anything you say here. But at the very least you have to admit that C#, C++, and Java have syntactic differences. Like it or not, you are going to be coding the abstraction three different ways (and not just syntax, e.g. the lack of garbage collection in C++, the lack of templates in pre-1.5 Java, etc). However, as you have rightly observed, those three implementations are going to be very similar. The similarity is because they are presenting the same abstraction in slightly different syntaxes and language models. MDA just takes that one step further: it takes that abstraction that is just dying to get out (to take your example, a Collections API) and models it so it is truly independent of whether you're going to C# or Java (with the help of an Action Semantics language). The beauty of it is that now you are well protected. The same model, coupled with a language-specific backend, can take you to either yesterday's or tomorrow's language du jour.
    And so it is. If you assume Java as your starting point. But that's just a consequence of code-centric thinking.
    Java seems to offer the only running virtual machine across most 'platforms'. If one is coding to reality and coding network-aware programs, one should stick to this level of abstraction and earn some money rather than be thought of as a theorist/dreamer. When MDA systems have been up and running for 9 or so years and it's proven to work, I guess I'll move up a meta-level.
    I don't blame you for wanting to wait and see how MDA shapes up. With everyone jumping on the bandwagon, it's difficult to predict how things will shake out. But the premise is fundamentally sound. There is nothing airy-fairy theoretical about it. As the author himself admits, there are working translation-based MDA implementations out there, albeit for limited domains.
    Up to a point, yes. But we've seen that APIs that try to be truly platform independent end up being unwieldy and slow (eg Swing). There's a reason for this. The API is trying to do too much. Not only is it exposing the conceptual abstraction. Its also struggling to cover up the idiosyncracies of the platform it is operating on, and not only that but unify those idiosyncracies of several platforms. The only way to do this is to move up to a least common ancestor.Is this not what MDA is trying to do? The point you make (well taken) is that an abstraction that should run across all platforms (like MDA models, like Java), will never perform as well as one targeted to a platform.
    No, MDA is not trying to do this. Unlike in Java, in which the *same* compromise solution has to *execute* fairly efficiently on each platform, in MDA the model does not run on any platform (it may be executable - but that is just a verification artifact). You only get an executable solution by virtue of mapping it to the specific platform. The cleverness then is in the mapping. Note how the problem has been divided up into two pieces that are tackled separately. One, the platform independent model, deals with getting e.g. trees correct. The second deals with executing it well on each platform.
    I think you've been fairly lucky.
    Not really lucky, just bright enough to code to a virtual machine that runs on most 'platforms'.
    Most, but not all. That's why I say lucky.
    Ah politics. That's an entirely different story. In the end MDA will probably end up being driven by politics, the way the UML standards already are. So in that sense I too am sceptical. However, a good idea will always struggle to get out from under the cover of politics.
    Business and politics seem to rule the day rather than reason, don't they?
    Sadly, yes. Every now and then the right thing seems to happen, but only if we are vigilant.

    cheers
  44. A visual programming language?[ Go to top ]

    The only thing I know about MDA is what I read in this article. But to me, it seems like MDA is just a programming language like Java and C#. The only difference is that MDA has visual "source code" (and doesn't have a complete syntax and semantics yet).

    Translating (visual) code to new technologies is not new either. You can translate COBOL source code to Java source code if you want to... The only problem is that COBOL isn't OO, so the generated Java code you get isn't OO either. The same problem applies to MDA if we get a new non-OO paradigm.

    Code generation is very cool if you find that you are writing a lot of repetitive code. But it seems MDA is aiming even higher, trying to define the ultimate (visual) language that will render all other (even future) languages superfluous... get real!
  45. MDA In a Nutshell[ Go to top ]

    Aspect-Oriented Programming is not OO. Does that make it bad? MDA is not Object-Oriented Software Construction, but it certainly builds upon it through inclusion of concepts, in much the same way that AOP builds upon OO but is not itself OO.

    Model Driven Architecture is another way of separating concerns and putting the DRY principle into practice. The fundamental problems OO was once intended to solve haven't actually been solved by OO, because it is so frequently misused.

    Consider: Breaking code out into data structures and functions was considered conceptually harmful because it often led to unmaintainable systems (the canonical example of this being the magic switch/case statement).

    So along comes OO where your modules are divided up along the lines of your data structures, and, in fact, your modules "embody" (encapsulate) the data. The whole purpose being to 'have a single authoritative representation of knowledge in the system.'

    At its core, OO is about how to organize your lines of code. But take just a small step back, and start thinking in designs. Now it's about classifying and grouping things by which messages they respond to. Organizing collaborations of these things (ultimately rendered as lines of code) into recognizable patterns allows us to label design features. So if I say to you that I see subsystem X uses the bridge, adapter and abstract factory patterns, you have a general idea of how that subsystem is built and what kinds of classifications of things that can receive the same sort of messages exist in the underlying code (see how I neatly avoided saying meta-meta-meta...).

    Modeling (of the sort done in MDA) takes this organization and classification a step further. So let's follow this abstraction process.

    Runtime Objects
        | (described by)
    Classes collaborating
        | (labeled as)
    Pattern instances
        | (classified by)
    Patterns

    But now you're designing an architecture. An architectural design is the repeated application of a design technique (pattern) to elaborate a (business) domain concept. So, for that Customer thing, you apply the factory, and the facade, and the... etc. Then for the Product thing, you apply the factory, and the... You get the idea. The design is expressed repeatedly throughout the implementation.

    So here we are, accomplished OO programmers all, and while our code doesn't violate the original notion of the DRY principle, we find that the overall body of code does, in fact, contain repetition of concepts orthogonal to the domain. We've repeated the work of representing the architecture as design expressions over and over and over. Worse, we can't reuse that design, because there is no handle (mental or computational) by which we can grab that design and apply it to a new part of the application, or a different application entirely.

    We need some way to make architecture roughly orthogonal to the code. In other words we want a way to represent knowledge of a system of design decisions (an architecture) in a single authoritative place.

    Frameworks, APIs, and middleware only go part of the way to solving the problem, while creating problems of their own. Let's take Struts, for example. For better or worse, Struts enforces an architecture on part of your application. You have a MagicServlet (Struts' ActionServlet) playing traffic cop, interpreting an XML file and dispatching to the appropriate action object, which forwards to some JSP.

    The problem is that while Struts can manipulate any given combination of action/forwarding-chain/view assembly, it doesn't help you elaborate your domain concepts (through the application of patterns) into this implementation in a consistent manner. In other words, we're not working in bare Java, but we must still repeat design knowledge; the target is Struts instead of just the Java APIs.

    So how do we satisfy DRY and separation of concerns across multiple levels of concern? MDA is one way of doing this for multiple levels of abstraction.

    Let's start at the bottom.

    Assumption #1: As a default condition, you want architectural decisions for a particular class of problem, for a particular target platform (Struts, EJB, DBMS, POJO + Hibernate, whatever) applied consistently.

    Assumption #2: As a default condition, you want the design of the collaborating elements of your application to be represented in a way that is not dependent on the details of the implementation, but that does have generic representations of things that are conceptually the same across multiple implementations. To illustrate: I do want to represent things common to relational databases, like tables and foreign keys, when I'm designing my relational database, but I do not want things specific to Oracle 8i and PL/SQL.

    So I have the implementation (Oracle 8i-specific PL/SQL), I have the DBMS-specific yet implementation-independent representation of my domain, and I have the mapping decisions (the application of an architecture) that take me from that platform-specific representation to the implementation-specific representation (code).

    For the sake of preserving my fingertips, let's call each of these three kinds of artifacts models (I'll justify that in just a bit).

    There is one more problem. I don't just want relational tables. I want Java objects with Hibernate mapping files. I want Velocity pages for my UI. So I have a representation system, *cough*, language, where I can represent my Customer as a relational database artifact set, but I also want to represent Customer as a simple class with the attendant mapping to a POJO and Hibernate. And, because I am the demanding sort, I also want some way to represent Customer as a templated UI component with a mapping to a collection of Velocity pages. OK, now I have three languages, six models (DBMS, DBMS to Oracle, simple class, simple class to POJO/Hibernate, templated UI (a.k.a. pull-MVC if I remember correctly), and templated UI to Velocity), with knowledge of my Customer thing repeated three times. Arrrrgggghhhh! Wait, I can just do that trick again!

    Now I need a language that represents concepts common to the style of application I've selected. So I need something that represents, in generic terms, the things in a domain-object architecture. In that language I will represent my Customer thing once, I will then map out the translations between this high-level domain model and my more platform-specific models. Each of these mappings necessarily contains design decisions, but each design decision is implemented once.
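    The idea can be caricatured in plain Java (this is an invented toy, not real MDA tooling, and the types and methods are all hypothetical): a single platform-independent description of Customer, with each mapping implemented once and reusable for any other domain concept:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A single, platform-independent description of a domain concept.
class DomainClass {
    final String name;
    final Map<String, String> attributes = new LinkedHashMap<>(); // attr name -> type

    DomainClass(String name) { this.name = name; }
    DomainClass attr(String attrName, String type) { attributes.put(attrName, type); return this; }
}

// Each design decision is implemented once, as a transformation over the model.
class SqlMapping {
    // Toy mapping: every attribute becomes VARCHAR; assumes at least one attribute.
    String toDdl(DomainClass c) {
        StringBuilder ddl = new StringBuilder("CREATE TABLE " + c.name.toUpperCase() + " (");
        c.attributes.forEach((n, t) -> ddl.append(n).append(" VARCHAR(255), "));
        ddl.setLength(ddl.length() - 2);   // trim trailing ", "
        return ddl.append(")").toString();
    }
}

class JavaMapping {
    String toSource(DomainClass c) {
        StringBuilder src = new StringBuilder("public class " + c.name + " {\n");
        c.attributes.forEach((n, t) -> src.append("    private ").append(t).append(" ").append(n).append(";\n"));
        return src.append("}").toString();
    }
}

public class CustomerDemo {
    public static void main(String[] args) {
        DomainClass customer = new DomainClass("Customer")
                .attr("name", "String").attr("email", "String");
        System.out.println(new SqlMapping().toDdl(customer));     // DDL from the one model
        System.out.println(new JavaMapping().toSource(customer)); // Java source from the same model
    }
}
```

    The decision of how a class becomes a table lives in exactly one place (`SqlMapping`) instead of being re-expressed per domain concept, which is the DRY payoff the text is describing.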

    We've represented software, and all sorts of repeatable decisions about that software, in this system of models. We've devised a system of transformations for our application that allows us to apply the design consistently. But how do we consistently apply this approach across multiple projects (DRY again!)?

    We need a consistent, and more pointedly, standardized way of doing the following things:
    1. Represent (modeling) languages, whether platform-specific or platform-independent
    2. Represent transformations (mappings or elaborations) between those models
    3. Represent application of implementation decisions (code generation) from platform-specific models

    Within the OMG, we are working on standards for these things. MOF (the Meta Object Facility) gives us a standard way of representing metamodels, that is, representing models of modeling languages. That takes care of #1. #2 and #3 will be taken care of by the MOF-QVT and MOF-to-Text standards when they are completed.
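    One way to build intuition for what a metamodel buys you, sketched as invented plain Java rather than the real MOF or JMI APIs: when the modeling language itself is data, a single generic tool can walk models written in any such language:

```java
import java.util.ArrayList;
import java.util.List;

// The metamodel layer: a model of the modeling language itself.
class MetaClass {
    final String name;
    final List<String> attributeNames = new ArrayList<>();
    MetaClass(String name) { this.name = name; }
}

// A model element is an instance of a metaclass, not a hardwired Java type.
class ModelElement {
    final MetaClass metaClass;
    ModelElement(MetaClass metaClass) { this.metaClass = metaClass; }
}

public class MetaDemo {
    // A generic tool: it works on ANY modeling language expressed this way,
    // whether the metaclass is "Table", "EntityBean", or "VelocityPage".
    static String describe(ModelElement e) {
        return e.metaClass.name + " with attributes " + e.metaClass.attributeNames;
    }

    public static void main(String[] args) {
        MetaClass table = new MetaClass("Table");   // a relational modeling language
        table.attributeNames.add("tableName");
        System.out.println(describe(new ModelElement(table)));
    }
}
```

    MOF standardizes that reflective layer, so transformation and generation tools (the eventual QVT and MOF-to-Text implementations) can be written once against metamodels instead of once per modeling language.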

    That, in a nutshell, is MDA. The fact that you can draw UML diagrams in the context of this system of models is just an added bonus. This is not visual programming.

    Regards,

    Michael Murphree
  46. MDA In a Nutshell

    MDA is not Object-Oriented Software Construction...
    Are you suggesting that JavaGen, AndroMDA, JMI, QVT, etc, aren't object oriented? MDA embodies OOA/D/P, so what you say would need a special explanation.
  47. MDA In a Nutshell

    Object-oriented software is object-oriented because the fundamental unit of decomposition is the object. In an MDA approach, the fundamental unit of decomposition is the model element, a graph node (don't hit me, I'm not actually a math geek).

    Don't get me wrong, the prime target for MDA tools that produce business applications [plug]like OptimalJ[/plug] is OO code, but in a mathematical sense, MDA is a superset of OOSC.

    So JavaGen, AndroMDA, and JMI are implemented in and operate in the context of OO software. In other words, they use or include OO concepts and techniques. MOF-QVT, on the other hand, will operate on model elements. I can guarantee you that at least one of the QVT implementations in tools will use object technology, but QVT can be implemented in functional or procedural (imperative) languages as well.

    You can certainly build OO systems with MDA, and indeed you're modeling classes with operations, attributes, and associations. But MDA is not the same thing as OO. I hope that clarifies my statements.

    Regards,

    Michael