Discussions

News: Beyond Java

  1. Beyond Java (770 messages)

    Rumor has it that after Bruce Tate and Justin Gehtland finished working on a web application written in Spring, Hibernate, and Webwork, they were able to recreate the application in Ruby in 4 nights. From that point on Bruce and company began questioning Java's applicability in different problem domains. The book Beyond Java came pouring out after several epiphanies from this Ruby porting experience.

    Read more about Bruce's latest in this article.


Could Ruby and frameworks such as Ruby on Rails be the next big thing?

    Threaded Messages (770)

2. beyond... what?

I couldn't comment on RoR since I've never used it. And it's admittedly just a matter of personal taste, but I find its syntax, such as
     link_to_remote "Delete this post",
         :url => { :action => "destroy", :id => post.id },
         :update => { :success => "posts", :failure => "error" }
    ...well, let's say I find it "aesthetically challenged".

Nevertheless, it's not about aesthetics. There's more to Java than the language, the VM, or even web frameworks: namely, a lot of APIs, tons of premium-quality open-source code, not to mention such mundane things as talent availability, industry momentum and vendor commitment.

What I do see is that RoR persistently and desperately tries to pretend it _is_ the next big thing. Well, it is next, but not that big. I admit there might be some reasonably good ideas in RoR, but to overthrow Java it would need to become much more than it is now.

So far, when I need to make a quick'n'dirty webapp, I'd rather resort to PHP. Especially when it's likely that the customer will need to maintain the application for a while without me.
3. Not even the next thing

What I do see is that RoR persistently and desperately tries to pretend it _is_ the next big thing. Well, it is next, but not that big. I admit there might be some reasonably good ideas in RoR, but to overthrow Java it would need to become much more than it is now.

Agree, but RoR is not even the next thing. This is nothing new, just the old MDA approach. We have had this domain-model-driven CRUD framework implemented in Java for several years. All I can say is that RoR, in its current form, is still like a toy needing a lot more fine-tuning. And CRUD operations are just a small part of a real-world app. In fact, they are becoming a less significant part of our whole system.

However, the Ruby and RoR people do seem to have succeeded at driving away the .NET advocates from this forum.
4. Beyond Java

It looks like there is nothing left to write about, or nobody buys books about good old things, so writers are looking for the next big thing.
5. Beyond Java

    +1
6. Beyond Java

    they were able to recreate the application in Ruby in 4 nights...

    Right then. Someone then says 'Sorry wrong database - we need this on a different vendor's system and it will need to cope with future schema changes'.

Rapid development does not always (or I would say 'often') outweigh long-term maintainability.
Could Ruby and frameworks such as Ruby on Rails be the next big thing?

No, of course not. The reason is that there never has been a 'next big thing'. In spite of hype and epiphanies, IT has always moved on by evolution, not revolution. There have been many revolutionary languages (Algol, Simula, Smalltalk, LISP, etc.) but these almost never become general purpose. Instead the mainstream adopts their good ideas, often in a diluted form.

    Also, I have to say that the article does contain some statements which are, at the very least, highly questionable:

    -Java is moving away from its base. Hardcore enterprise problems may be easier to solve, but the simplest problems are getting harder to solve.

Exactly how? I have found that much of Java 5.0 has made the simplest problems far, far easier to solve, with less verbose and faster code. There is nothing at all about Java that has changed so as to make simpler problems harder to solve.
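
For example, generics and the enhanced for loop alone strip away a lot of ceremony. A minimal sketch of the same method before and after (the class names are mine):

import java.util.Iterator;
import java.util.List;

class Before5 {
    // JDK 1.4: Iterator boilerplate and an unchecked cast
    static int totalLength(List words) {
        int total = 0;
        for (Iterator it = words.iterator(); it.hasNext();) {
            total += ((String) it.next()).length();
        }
        return total;
    }
}

class With5 {
    // JDK 5.0: the same method, shorter and type-checked at compile time
    static int totalLength(List<String> words) {
        int total = 0;
        for (String w : words) {
            total += w.length();
        }
        return total;
    }
}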

    -Java is showing signs of wear, and interesting innovations are beginning to appear outside of Java.

    As Spring, Hibernate and other technologies start to move into the mainstream of Java use, and as the latest release of Java (5.0), which has more changes than any release of Java for years, moves into mass use, saying Java 'is showing signs of wear' seems a bit strange.

With new and exciting systems like real-time Java showing practical usefulness, saying that Java is lacking in innovation also seems somewhat far from what I see as reality.

    Java has shown an ability to adapt and add new features. It will continue to do so.
7. Beyond Java

    Right then. Someone then says 'Sorry wrong database - we need this on a different vendor's system and it will need to cope with future schema changes'.

You're not too familiar with RoR, are you? It uses an O/R layer, similar in some ways to Hibernate, to hide DB-specific code/queries from the application developer. In addition, its "out of the box" functionality makes it so that a schema change is automatically reflected in the UI, with no code change at all. So if you add a column to a table and have a web page that lists entries from that table, or one that shows details about a row of data, these pages will show the new column without any code being written, or even any configuration files changed.
Rapid development does not always (or I would say 'often') outweigh long-term maintainability.

And long-term maintainability is grossly overvalued. I don't know how many times I've seen projects where the architecture was designed for the application to last for decades and weather any "big" changes in business model. Then the inevitable "big" change in business model happens. A quick adjustment is either attempted unsuccessfully, or it is just recognized that major code changes are going to be needed despite the original grand architecture.

I've seen this happen to big software products. I've seen it happen to numerous Fortune 100 companies, not to mention smaller companies and start-ups as well. In almost every case, they would have benefited more from rapid development than from "flexible" architecture. It's these real-world lessons that have spurred the development of things like RoR.
Exactly how? I have found that much of Java 5.0 has made the simplest problems far, far easier to solve, with less verbose and faster code.
Have you seen some of the great new Java 5 syntax? How about we compare the signature (from the Javadoc) for Collections.binarySearch()? Searching a collection for something is a pretty simple task, right? First, the old JDK 1.4.2:
    public static int binarySearch(List list,
                                   Object key)
    Now JDK 5.0:
    public static <T> int binarySearch(List<? extends Comparable<? super T>> list,
                                       T key)
    How can that be described as anything but more complex? Other examples of extra complexity introduced in Java 5.0 are annotations and the concurrency packages. As an experienced developer, I welcomed these additions as they provided extra power to the programmer. But you must recognize that for somebody new to Java, these things can look like voodoo.
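
To be fair about what the voodoo buys you: those wildcards exist so that real-world calls compile at all. A minimal sketch (the Fruit/Apple classes are mine):

import java.util.Collections;
import java.util.List;

class Fruit implements Comparable<Fruit> {
    final int weight;
    Fruit(int weight) { this.weight = weight; }
    public int compareTo(Fruit other) { return this.weight - other.weight; }
}

class Apple extends Fruit {
    Apple(int weight) { super(weight); }
}

class WildcardDemo {
    // Apple only implements Comparable<Fruit>, never Comparable<Apple>;
    // the "? super T" wildcard is exactly what lets this call compile.
    static int find(List<Apple> apples, Apple key) {
        return Collections.binarySearch(apples, key); // list must be sorted
    }
}
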
    As Spring, Hibernate and other technologies start to move into the mainstream of Java use, and as the latest release of Java (5.0), which has more changes than any release of Java for years, moves into mass use, saying Java 'is showing signs of wear' seems a bit strange.
Actually it's the emergence of projects like Spring and Hibernate that showcases Java's age the most. If you wanted to teach a newbie programmer to write a simple web application, would you just tell him to learn JDBC and JSPs? Or would you tell him to learn JDBC and JSP, then JSTL, then JSF, and then EJBs? Or would you tell him to learn JDBC and JSP, then Struts, then Hibernate, and maybe then also Spring? Struts, Hibernate, and Spring all came about because of the shortcomings of "standard" Java technologies. It winds up being a huge stack of technologies to learn, and you really see the age of things as you see how these incremental technologies cover up the wrinkles of the standard technologies they were built on and complement.
8. Beyond Java

Actually it's the emergence of projects like Spring and Hibernate that showcases Java's age the most. If you wanted to teach a newbie programmer to write a simple web application, would you just tell him to learn JDBC and JSPs? Or would you tell him to learn JDBC and JSP, then JSTL, then JSF, and then EJBs? Or would you tell him to learn JDBC and JSP, then Struts, then Hibernate, and maybe then also Spring? Struts, Hibernate, and Spring all came about because of the shortcomings of "standard" Java technologies. It winds up being a huge stack of technologies to learn, and you really see the age of things as you see how these incremental technologies cover up the wrinkles of the standard technologies they were built on and complement.

No, it is the emergence of Spring and Hibernate that shows Java's maturity. There is NOTHING in the Java language that is a shortcoming addressed by Spring or Hibernate. I would say that the strength and vitality of Java are demonstrated by the fact that it can successfully be all things to all people, something Ruby or RoR hasn't proven.

By comparison, Ruby is narrow and IMO restricted. And if the day ever comes when it can match Java's breadth of technologies, I suspect that it will look every bit as complex as Java does today. It is easy to look simple when a given tool doesn't do as much. That's why I've never bought into the arguments about why PCs cannot be as simple as toasters. A PC is more complex and can do more.

Again, many of these arguments are the same ones that were used for every scripting language that challenged the accepted static incumbent, and as we've seen time and time again, the only thing that has replaced a static language is another static language.
9. Beyond Java

No, it is the emergence of Spring and Hibernate that shows Java's maturity. There is NOTHING in the Java language that is a shortcoming addressed by Spring or Hibernate. I would say that the strength and vitality of Java are demonstrated by the fact that it can successfully be all things to all people, something Ruby or RoR hasn't proven.

By comparison, Ruby is narrow and IMO restricted. And if the day ever comes when it can match Java's breadth of technologies, I suspect that it will look every bit as complex as Java does today. It is easy to look simple when a given tool doesn't do as much. That's why I've never bought into the arguments about why PCs cannot be as simple as toasters. A PC is more complex and can do more.

Again, many of these arguments are the same ones that were used for every scripting language that challenged the accepted static incumbent, and as we've seen time and time again, the only thing that has replaced a static language is another static language.

That's a very good point. Over the last 20 years, my primary languages (because they have been the primary languages of my employers) have been C, C++ and Java. I actually still use each of these. Most of what I do doesn't involve web applications - maybe I write one a year and spend half a day doing it. The domain of problems solved by these languages is much (much) larger than what is currently envisioned by some of the so-called dynamic languages.

    There is definitely a place for languages like Ruby - whether they will evolve into the "Next Big Thing" depends on the breadth of problems they solve.

    Cheers
    Ray
10. Beyond Java

You're not too familiar with RoR, are you?

    Yes I am.
It uses an O/R layer, similar in some ways to Hibernate, to hide DB-specific code/queries from the application developer.

No, it doesn't. You use DB-specific queries. The language you write anything but the simplest queries in is the native SQL of the database, something much of the Java community started to move away from years ago.

This is why we use HQL, JDOQL and EJBQL.
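
For instance, a typical Hibernate query is written against the mapped classes, not a vendor's SQL dialect. A minimal sketch, assuming a mapped Post class:

import java.util.List;
import org.hibernate.Session;

public class PostDao {
    // HQL addresses the object model; Hibernate translates it into the
    // native SQL of whatever database dialect is configured.
    public List findByAuthor(Session session, String author) {
        return session.createQuery(
                "from Post p where p.author = :author order by p.id desc")
            .setParameter("author", author)
            .list();
    }
}
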
In addition, its "out of the box" functionality makes it so that a schema change is automatically reflected in the UI, with no code change at all. So if you add a column to a table and have a web page that lists entries from that table, or one that shows details about a row of data, these pages will show the new column without any code being written, or even any configuration files changed.

Assuming this is correct (and I am not convinced - as I understand it, scaffolding is a once-only process), this is completely impractical for any serious web design. No one who writes professional-looking web pages would want arbitrary new fields and controls appearing on their pages simply because someone changed the schema!
    But you must recognize that for somebody new to Java, these things can look like voodoo.

    And

    @@name

    to access class variables is simple? Sure looks like obscure voodoo to me.
    As Spring, Hibernate and other technologies start to move into the mainstream of Java use, and as the latest release of Java (5.0), which has more changes than any release of Java for years, moves into mass use, saying Java 'is showing signs of wear' seems a bit strange.
Actually it's the emergence of projects like Spring and Hibernate that showcases Java's age the most. If you wanted to teach a newbie programmer to write a simple web application, would you just tell him to learn JDBC and JSPs? Or would you tell him to learn JDBC and JSP, then JSTL, then JSF, and then EJBs? Or would you tell him to learn JDBC and JSP, then Struts, then Hibernate, and maybe then also Spring? Struts, Hibernate, and Spring all came about because of the shortcomings of "standard" Java technologies. It winds up being a huge stack of technologies to learn, and you really see the age of things as you see how these incremental technologies cover up the wrinkles of the standard technologies they were built on and complement.

No, it is because a good (not quick-and-dirty) web application IS difficult and full of wrinkles.

    The fact that Struts, Hibernate and Spring have arisen shows that Java is flexible and is not pinned down to only the standards. It shows that new approaches and innovations can become mainstream in Java.
11. Coding time is not the main factor in choosing a language (or a platform, should I say, at least for Java).

Given the scripting features coming in Java 6, Groovy and such, clearly "beyond Java" includes scripting, Groovy on Rails or whatever you like as a developer, but it does include "Java" as a platform - not Ruby.
12. Making Java more Dynamic

Java needs to strike a balance: stable enough to be used in a conservative corporate environment, while evolving rapidly enough to keep up with dynamic languages like Ruby and Python.

I've never used Ruby, but I've used Python a lot. Python is great for short programs, but I wouldn't want to use it for anything more than a couple thousand lines or on a project that involved several people. The bigger the program gets, the harder it is to live without static checking.

However, the interesting thing about Python programs is that if you are smart they tend to shrink as much as they grow. If you notice something repetitive occurring in your attributes, you write a special descriptor to handle it. A pattern in the form of your classes? Write a new metaclass that applies the pattern. So code grows as you implement functionality, and shrinks as you refactor common behavior into metaobjects.

    From a practical standpoint, you could conceive of this as compile-time programming, because (at least in my experience) the metaprogramming code is all done before the application code starts executing.
13. I Hate JavaBeans

Look at the (very rough) code below and start thinking about how many LOCs could be saved by introducing the illustrated "attribute" language feature, especially for more sophisticated usages...
public class Foo {
    private String msg = "Hello, ";
    private String name = "World";

    public String getMsg() { return msg; }
    public void setMsg(String inMsg) {
        if (inMsg == null) throw new IllegalArgumentException("msg cannot be null!");
        if (inMsg.equals("")) throw new IllegalArgumentException("msg must not be blank!");
        msg = inMsg;
    }

    public String getName() { return name; }
    public void setName(String inName) {
        if (inName == null) throw new IllegalArgumentException("name cannot be null!");
        if (inName.equals("")) throw new IllegalArgumentException("name must not be blank!");
        name = inName;
    }

    public void sayHello() { System.out.println(msg + name + "!"); }
}

with a new Java construct...
public abstract attribute Attribute<T> {
    public instance T get();           // invoked when accessed via syntax obj.attrName
    public instance void set(T value); // invoked via syntax obj.attrName = newValue;
}

    and...
public attribute NonBlankString implements Attribute<String> {
    // this constructor is executed at compile time or maybe class load time
    public NonBlankString(String defValue, String nullMessage, String blankMessage) {
        if (defValue == null) throw new IllegalArgumentException("defValue cannot be null");
        if (nullMessage == null) throw new IllegalArgumentException("nullMessage cannot be null");
        if (blankMessage == null) throw new IllegalArgumentException("blankMessage cannot be null");
        defVal = defValue;
        nullMsg = nullMessage;
        blankMsg = blankMessage;
    }

    public instance String get() { if (value == null) return defVal; return value; }
    public instance void set(String inVal) {
        if (inVal == null) throw new IllegalArgumentException(nullMsg);
        if (inVal.equals("")) throw new IllegalArgumentException(blankMsg);
        value = inVal;
    }

    private instance String value;
    private String defVal;
    private String nullMsg;
    private String blankMsg;
}

    and finally
public class Foo {
    // introduce the arrow operator for compile-time assignment
    public String name <- new NonBlankString("World", "name cannot be null", "name cannot be blank");
    public String msg <- new NonBlankString("Hello, ", "msg cannot be null", "msg cannot be blank");

    public void sayHello() { System.out.println(msg + name + "!"); }
}
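
For comparison, the closest plain Java 5 gets today is a reusable property object, at the cost of uglier call sites. A rough sketch of mine, not a proposal:

// The validation pattern is written once and reused, but callers must
// write obj.name.get() instead of the obj.getName() JavaBeans expect.
public class NonBlankString {
    private final String defVal, nullMsg, blankMsg;
    private String value;

    public NonBlankString(String defVal, String nullMsg, String blankMsg) {
        this.defVal = defVal;
        this.nullMsg = nullMsg;
        this.blankMsg = blankMsg;
    }
    public String get() { return value == null ? defVal : value; }
    public void set(String v) {
        if (v == null) throw new IllegalArgumentException(nullMsg);
        if (v.equals("")) throw new IllegalArgumentException(blankMsg);
        value = v;
    }
}

class Foo {
    final NonBlankString name =
        new NonBlankString("World", "name cannot be null", "name cannot be blank");
    final NonBlankString msg =
        new NonBlankString("Hello, ", "msg cannot be null", "msg cannot be blank");

    void sayHello() { System.out.println(msg.get() + name.get() + "!"); }
}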
14. I Hate JavaBeans

I fail to see how fewer lines of code make it easier for someone else to maintain the code once the original author has moved on to some other job. I'm getting the feeling that many programmers don't get why businesses like Java, or why they like having something that is verbose with well-defined practices. Businesses want to reduce risk, and that often manifests itself in the form of development process, best practices and lots of documentation.

Since I've done quite a bit of contract/consulting work over the last 8 years, my experience is that languages that encourage sparse one-line-style coding have a higher tendency to be difficult to maintain. C# improved on Java by providing properties, but it's a mixed bag. If there's only one person writing an application, then who cares about LOC. In contrast, on a medium, large or extremely large project it's inappropriate to put LOC first. Clean, well-implemented and documented code is far more important in medium to large projects.

Just look at .NET 1.0's XSD.exe: the classes it generated were very terse and used public fields. In .NET 2.0, they switched to making the fields private and using properties. LOC isn't an argument for or against Ruby or RoR in my mind. I would say LOC is only valid for a 1-man project. My biased opinion.

    peter
15. Ok, not really about LOCs

It's about avoiding duplication. It's about grouping related pieces of information together (which .NET attributes somewhat achieve for properties).

    But most of all, in my opinion, it's about making the code better reflect the intent of the programmer rather than the mechanics of the program. It's about being able to explicitly code a pattern with some parameters, and then use it over and over, rather than thinking about the pattern, and then coding it over and over again.

I'm not sure how much sense I'm making. There's some Python code I've written in the past that I wish I could post, which I think really shows how good use of metaprogramming can drastically change the balance between imperative and declarative code.

I'm really not sure how to explain it. I'm a huge believer in static code checking (actually in fail-fast behavior, with compile-time being ideal), which frequently makes Python drive me crazy. But once you really, really get into metaprogramming, it feels like another plane of existence. I really want to see a language that effectively combines the two, and I think a JVM-based language could do it (but I'm not sure about a Java derivative).
16. Ok, not really about LOCs

It's about avoiding duplication. It's about grouping related pieces of information together (which .NET attributes somewhat achieve for properties).

But most of all, in my opinion, it's about making the code better reflect the intent of the programmer rather than the mechanics of the program. It's about being able to explicitly code a pattern with some parameters, and then use it over and over, rather than thinking about the pattern, and then coding it over and over again.

I'm not sure how much sense I'm making. There's some Python code I've written in the past that I wish I could post, which I think really shows how good use of metaprogramming can drastically change the balance between imperative and declarative code.

I'm really not sure how to explain it. I'm a huge believer in static code checking (actually in fail-fast behavior, with compile-time being ideal), which frequently makes Python drive me crazy. But once you really, really get into metaprogramming, it feels like another plane of existence. I really want to see a language that effectively combines the two, and I think a JVM-based language could do it (but I'm not sure about a Java derivative).

I agree with that. Having the language more accurately express the functionality and intent of the application is a noble goal, but I seriously doubt it is achievable. I hope I'm wrong though. The reason I think it's impossible is that every programmer is different. What I consider clear will most likely look like a pile of junk to someone else. Some people love LISP, while others hate it. I'm biased, but my gut tells me these debates will never be resolved. Languages, be they programming or spoken, aren't inherently better, they're just different. The same is true of RoR.

    peter
17. Ok, not really about LOCs

It's about avoiding duplication. It's about grouping related pieces of information together (which .NET attributes somewhat achieve for properties).

But most of all, in my opinion, it's about making the code better reflect the intent of the programmer rather than the mechanics of the program. It's about being able to explicitly code a pattern with some parameters, and then use it over and over, rather than thinking about the pattern, and then coding it over and over again.

I'm not sure how much sense I'm making. There's some Python code I've written in the past that I wish I could post, which I think really shows how good use of metaprogramming can drastically change the balance between imperative and declarative code.

I'm really not sure how to explain it. I'm a huge believer in static code checking (actually in fail-fast behavior, with compile-time being ideal), which frequently makes Python drive me crazy. But once you really, really get into metaprogramming, it feels like another plane of existence. I really want to see a language that effectively combines the two, and I think a JVM-based language could do it (but I'm not sure about a Java derivative).

Exactly. Rails may just be a stepping stone, or it may just fill a niche. The real meat is behind metaprogramming. How do you take a building block and integrate it into the very core of the language? That way, your language grows with your domain. It feels like you are coding on a higher plane, because you *are* coding on a higher plane. Abstractions will inherently rise up over time.

The point in Beyond Java is that this is just the sort of catalyst that could cause a new language to emerge. Keep in mind that with Java, Netscape and applets were the catalysts, but they just exposed us to a better programming language that we now use in other ways.
  18. <quote>
The real meat is behind metaprogramming. How do you take a building block and integrate it into the very core of the language? That way, your language grows with your domain. It feels like you are coding on a higher plane, because you *are* coding on a higher plane. Abstractions will inherently rise up over time.
    </quote>

    ... therefore we have annotations in Java5
    ... therefore we are working towards MDA/MDD

    Cheers,
    Lofi.

BTW, AndroMDA directly supports the creation of CRUD functions/screens from UML models.
19. I would like to say the same. We have been using approaches similar to MDA for 7 years now with great success. On a new project we jumped to AndroMDA because of its XMI/UML support for many UML tools. Again with great success. Please, all of you who speak for Rails - are you looking deeply enough into Java:
    - MDA
    - Annotations
    - AOP
    - APT in Java 6.0

I hope somebody at Sun understands the importance of the APT tool in Java 6. Together with an Abstract Syntax Tree it will be a great thing, and will allow translating the following code:
    ############################
class A {
    @NonNull String name;
    @NonNull @GetOnly Integer age;
}
    ############################

    To:
    ############################
class A {
    String name;
    Integer age;

    public String getName() {
        return name;
    }

    public void setName(String newName) {
        if (newName == null) throw ...
        name = newName;
    }

    public Integer getAge() {
        return age;
    }
}
    ############################
    Or multitier classes or anything you want.
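
The source-level annotations themselves are easy to declare in Java 5; only the APT processor behind them is the real work. A sketch (the annotation names are the hypothetical ones from above):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical markers an APT processor could expand into getters/setters.
@Target(ElementType.FIELD)
@Retention(RetentionPolicy.SOURCE)
@interface NonNull {}

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.SOURCE)
@interface GetOnly {}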
20. I would like to say the same. We have been using approaches similar to MDA for 7 years now with great success. On a new project we jumped to AndroMDA because of its XMI/UML support for many UML tools. Again with great success...

Great success? Success at achieving what, exactly? The difficulty with these types of discussions is that people are often talking at different levels of abstraction and addressing different concerns.

    For example:
Concern: I want to produce system software so that I can abstract away the details of my hardware platform.

    If you drill down this concern to a lower level of abstraction you get:

    Concern: I need to utilise my computing resources as efficiently as possible whilst still remaining hardware agnostic.

    Now if this is your concern, then C is a pretty good solution. In fact the problem of hardware agnostic efficient programming has been largely solved by the C language.

    What is the biggest concern facing developers today? IMHO it is:

    I want to build software applications that address complex human centric problems.

    decomposed:

I want to program at a higher level of abstraction.

    Now unless you can execute UML directly, I just don't see how MDA helps. UML is a graphical notation for describing object orientated programming. Much of the power of UML is that it doesn't need to execute. Hence UML can be used to express partial ideas. Trying to make UML executable reduces much of its power. This is the reason why I have little interest in UML 2.0.

    I can see MDA being a great success if your goal is to produce UML, but if your goal is executable code why do MDA?

Coming back to the concern: program at a higher level of abstraction. Pure object orientated programming languages like Smalltalk and Self are really the state of the art. IMHO, nothing in the last 30 years has come along that tackles this concern any better.
  21. <quote>
    Now unless you can execute UML directly, I just don't see how MDA helps. UML is a graphical notation for describing object orientated programming. Much of the power of UML is that it doesn't need to execute. Hence UML can be used to express partial ideas. Trying to make UML executable reduces much of its power. This is the reason why I have little interest in UML 2.0.
    I can see MDA being a great success if your goal is to produce UML, but if your goal is executable code why do MDA?
    </quote>

    you have 2 types of MDA:
1) Elaborationist: using a general programming language like Java to define your actions, and
2) Translationist (Executable UML): using Action Semantics to define your actions. No need to use other general programming languages.

If I'm not wrong, you don't like the idea of Elaborationist MDA (E-MDA)? So let's talk about E-MDA first ;-) There are a lot of advantages to using E-MDA, some of them:
1) Documentation: do you think it is easier to read a bunch of Java code instead of seeing a UML diagram? OK, you can generate the UML diagrams (at least the class diagrams) from your source code, but:

    2) Don't forget if you are doing top-down development with E-MDA you can do a model-model transformation (MM-Transformation) and model-code/model-text transformation (MC/MT-Transformation), so that you can "abstract" your model, separate your "concerns" and create your own domain specific language (DSL).

    Example:
You and I agree on a thing in "Our World": that a "Person" *always* has two attributes, "name" and "age". It's possible for us just to declare:

    Class: Person
Stereotype: <Our World>
Attributes: address, city

without mentioning that this Person also has a name and an age (implicit inheritance).

    Using DSL and transformations you can also separate the concerns like adding a technical description to the Person.

    Example:
All <Our World> entities should have an "id" which can be used to save a primary key in the database. This "id" is purely technical in this case and should *not* be added to your conceptual model.

3) Integration: today there are a lot of different language dialects (almost all of them in XML) like BPEL, Struts configs, EJB deployment descriptors, etc. You can integrate *all of them* in UML and do the MM-, MC- and MT-Transformations to generate the results. Don't you think that this would make your life easier as a developer? Draw your activity diagram, mark it with the given DSL, generate BPEL, Struts configs or whatever...

    There are a lot of advantages using E-MDA... Just check some MDA books available.

Now to T-MDA: surely this would be the best way, so you don't need Java anymore ;-) At least for some domains this is already possible. In the Open Source area there is still *no* way to do T-MDA completely for *business domain applications*. But I could imagine that this will happen someday ;-) Why should I take care of all the technical details if I just want to write *a business web application*? Action Semantics can make this happen, because it works for and (hopefully) *only for* UML. We don't need any "technical" language elements in this business language. Just comparable to SQL for DBs. Or do you think that we need to handle all those HTTP sessions and create/edit/remove all the DB tables (technical stuff) *explicitly* in our business applications? Nope. I don't want to take care of all of this stuff at all.

    If you see this pattern it is actually just the same as what Java has done with JVM, GC, etc. and the development of programming languages so far...

    <quote>
Coming back to the concern: program at a higher level of abstraction. Pure object orientated programming languages like Smalltalk
    </quote>

Nothing against Smalltalk, but an OO programming language by itself won't solve all the problems; we still need some additional techniques:
- Using models and adding correct DSLs to describe your software can help you to *concentrate* on your domain understanding.
    - Using AOP can help to manage the concerns.
    - Business rules to separate them from the application.
    - SQL for accessing the databases.
    - etc...

    and MDA (E-MDA is the first way and T-MDA will be the next way to go) just *integrates* all of these techniques for us... :-)

    Cheers,
    Lofi.
22. Oops, the stereotypes did not show correctly... Here again:

    <quote>
    Now unless you can execute UML directly, I just don't see how MDA helps. UML is a graphical notation for describing object orientated programming. Much of the power of UML is that it doesn't need to execute. Hence UML can be used to express partial ideas. Trying to make UML executable reduces much of its power. This is the reason why I have little interest in UML 2.0.
    I can see MDA being a great success if your goal is to produce UML, but if your goal is executable code why do MDA?
    </quote>

    you have 2 types of MDA:
1) Elaborationist: using a general programming language like Java to define your actions, and
2) Translationist (Executable UML): using Action Semantics to define your actions. No need to use other general programming languages.

If I'm not wrong, you don't like the idea of Elaborationist MDA (E-MDA)? So let's talk about E-MDA first ;-) There are a lot of advantages to using E-MDA, some of them:
1) Documentation: do you think it is easier to read a bunch of Java code instead of seeing a UML diagram? OK, you can generate the UML diagrams (at least the class diagrams) from your source code, but:

    2) Don't forget if you are doing top-down development with E-MDA you can do a model-model transformation (MM-Transformation) and model-code/model-text transformation (MC/MT-Transformation), so that you can "abstract" your model, separate your "concerns" and create your own domain specific language (DSL).

    Example:
You and I agree on a thing in "Our World": that a "Person" *always* has two attributes, "name" and "age". It's possible for us just to declare:

    Class: Person
    Stereotype: <Our World>
    Attributes: address, city

without mentioning that this Person also has a name and an age (implicit inheritance).

    Using DSL and transformations you can also separate the concerns like adding a technical description to the Person.

    Example:
All <Our World> entities should have an "id" which can be used to save a primary key in the database. This "id" is purely technical in this case and should *not* be added to your conceptual model.

3) Integration: today there are a lot of different language dialects (almost all of them in XML) like BPEL, Struts configs, EJB deployment descriptors, etc. You can integrate *all of them* in UML and do the MM-, MC- and MT-Transformations to generate the results. Don't you think that this would make your life easier as a developer? Draw your activity diagram, mark it with the given DSL, generate BPEL, Struts configs or whatever...

    There are a lot of advantages using E-MDA... Just check some MDA books available.

Now to T-MDA: surely this would be the best way, so you don't need Java anymore ;-) At least for some domains this is already possible. In the Open Source area there is still *no* way to do T-MDA completely for *business domain applications*. But I could imagine that this will happen someday ;-) Why should I take care of all the technical details if I just want to write *a business web application*? Action Semantics can make this happen, because it works for and (hopefully) *only for* UML. We don't need any "technical" language elements in this business language. Just comparable to SQL for DBs. Or do you think that we need to handle all those HTTP sessions and create/edit/remove all the DB tables (technical stuff) *explicitly* in our business applications? Nope. I don't want to take care of all of this stuff at all.

    If you see this pattern it is actually just the same as what Java has done with JVM, GC, etc. and the development of programming languages so far...

    <quote>
Coming back to the concern: program at a higher level of abstraction. Pure object orientated programming languages like Smalltalk
    </quote>

Nothing against Smalltalk, but an OO programming language by itself won't solve all the problems; we still need some additional techniques:
- Using models and adding correct DSLs to describe your software can help you to *concentrate* on your domain understanding.
    - Using AOP can help to manage the concerns.
    - Business rules to separate them from the application.
    - SQL for accessing the databases.
    - etc...

    and MDA (E-MDA is the first way and T-MDA will be the next way to go) just integrates all these techniques for us...

    Cheers,
    Lofi.
  23. Thanks for the description. I don't want to knock what I may not understand, but I think I've got some experience with this stuff:
1) Elaborationist: using a general programming language like Java to define your actions, and 2) Translationist (Executable UML): using Action Semantics to define your actions.

    If I understand you correctly then 1) is "coding" inside your CASE tool and 2) is code generation from your CASE tool. I've done both. There was a time in the early 90s when this stuff was pretty popular.

The problem was that making changes took forever. Your design-code-test cycle required a model update, code generation, test generation, then testing. So people just changed the code and got the tool to "update" the model later. The CASE tools supported this through round-trip engineering, but it never really worked, and CASE tools fell out of vogue.
    2) Don't forget if you are doing top-down development with E-MDA...

    I think this is the basic difference. Approaches like MDA focus too much I think on the process of code production rather than on the quality of the resultant code itself (IMHO). I don't do top down development. I do top-down, bottom-up, center out, edges-in, etc. all at the same time, all within my IDE. Basically anything that gets me "clean working code". I know that the code works because it passes all its tests. If it turns out that my code doesn't do what it should, then I add a test describing the required behaviour and get my code to pass it. My only artifacts are working code and tests that describe what the code is doing.

    Now I do have a place for models. It can be difficult to define what it is my code needs to do (precisely). So I create a UML model to help me. I model the abstractions and their interactions. From this I identify the tests needed to ensure that each component behaves as expected. Then I throw my model away.

    Yes throw it away. Why do I do this? I do this because I know that my model is wrong. It is wrong because it hasn't been tested and will need to change the moment I get concrete feedback from my tests and my users.

I write a test first, then implement the code that satisfies the test. If I think of a new test, then I just write it. If I realise that a test is flawed, then I just change it. As my understanding improves, my design (tests, code) and implementation (code) are changed too.
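
A minimal sketch of what I mean, in JUnit (the Account class is just an example of mine):

import junit.framework.TestCase;

// The test is written first and becomes the executable specification.
public class AccountTest extends TestCase {
    public void testDepositIncreasesBalance() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.getBalance());
    }
}

// The simplest code that passes the test; refactored later under test cover.
class Account {
    private int balance;
    public void deposit(int amount) { balance += amount; }
    public int getBalance() { return balance; }
}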

    The advantage? Well I can execute my tests. If my design is flawed, then I should be able to identify a test that proves it. By running the tests I get concrete feedback on the correctness of my design. If I want to improve the organisation of my code, then I go ahead and refactor. As long as the tests still pass then I know that my design improvements are safe.

This points to another difference. The code IS my design. UML is not my design, unless of course I use my CASE tool as a programming environment, in which case "UML+actions" is my code and my design (but who would want to do this? I've done it and it's terrible). So the only advantage to having lots of UML lying around is documentation.

If I really do need documentation, then I can generate that from my code once, at the end. The best tool I know for this is TogetherJ.

This is an extreme way of thinking, I know - I guess that's why they call it Extreme Programming. It's not perfect, but having tried both approaches, I tend to stick as close to the XP approach as I can.

BTW, can you explain what a DSL is? Every time I create a new class and add it to my code base, am I not creating a Domain Specific Language? My new abstraction is specific to my application domain. Is DSL another way of saying "use a language that has the right libraries"?
  24. Approaches like MDA focus too much I think on the process of code production rather than on the quality of the resultant code itself (IMHO).
    If by "quality" you mean runtime optimality, then please note that Moore's Law and Amdahl's Law devalue runtime. Whereas developers seem always to get more costly. Ie, trends favor MDA. Or maybe by "quality" you mean absence of deliverable defects, something code generation achieves with superhuman precision. One of the reasons we use javac to generate bytecode is since hand bytecoding is so error prone. In this sense quality and the elimination of human work tend to be linked.
25. I'm sorry, but this title is beginning to annoy me (437 messages later).

Metaprogramming provides all the things MDA lacks, the absence of which makes good programmers hate MDA.

MDA provides all the things that metaprogramming lacks to make it comprehensible to run-of-the-mill programmers.
26. MDA provides all the things that metaprogramming lacks to make it comprehensible to run-of-the-mill programmers.
    Have you noticed an evolutionary trend toward programming in a padded cell: assembly -> C -> C++ -> Java -> Ruby, etc? If bare metal made any economic sense, we'd still be Forth hackers. Did you study any manufacturing in school, like mass production and economy of scale?

The descendants of Algol (C, Java, etc.) have shined, but they're not the last evolution of syntax. Earlier in this thread, refactoring was cited as a case for Java. Already, experimental languages have emerged with refactoring in mind: o:XML and Subtext. Subtext's inventor said this: "My long-term goal is to decriminalize copy & paste. It's a natural way we want to program, so let's embrace it in our programming languages." In the future, every programming language will have an Eclipse plugin; even UML2/MDA would have a plugin.
  27. Hi Brian,

thanks for the links. Very impressive! I agree with you, it's just a matter of time. As usual, mankind will always search for better solutions :-) It depends on us whether we want to "ride" the "first mover effect", which always carries a big risk.

    Cheers,
    Lofi.
  28. Have you noticed an evolutionary trend toward programming in a padded cell: assembly -> C -> C++ -> Java -> Ruby, etc? If bare metal made any economic sense, we'd still be Forth hackers.

Yes, I have noticed that trend. And let me say the engineer in me absolutely hates it, but the project manager in me fully understands the importance.

However, I don't think languages like Ruby are a padded cell. They're more like a room with a closed door at one end and a mirror at the other.

    The closed door can be unlocked by writing C/C++ code and wrapping it for the target language. You can step through the mirror into the land of metaprogramming, but then you're going through the looking glass, and all the laws of the world you knew before get blurry.

    MDA and similar technologies like RoR are like having someone pop halfway out of the mirror, handing you a box with a crank, and telling you to turn the crank and it will magically produce working code. The box is great, but the problem is once you try to go beyond what it supports, you are lost.

MDA is kind of like putting a grate over the mirror, and that grate is the modelling language. Metaprogramming has no such grate, because with metaprogramming there really is no distinction between "meta-code" and "code."
    Did you study any manufacturing in school, like mass production and economy of scale?

Of course, everyone does. And now I work for what's often referred to as a "low volume, high value, high mix" manufacturer. Have you heard of Lean Manufacturing? Mass production was great in its day, but the sun is setting. Products in even the highest-volume industries are becoming more and more driven by customer needs.

Software should be ahead of the curve. Building a custom car for each customer should be about a billion times harder than building custom software, because car makers have annoying things like physics and supply chains to handle. Yet auto companies step closer and closer to it every day while reducing per-unit manufacturing costs, even as the leaders in the software industry proclaim that custom software is dead and everyone should use on-demand hosted solutions.

Programmers don't need cookie cutters to help them mass produce software, or the ones that do should probably sell insurance or something like that. Programmers need to be able to smoothly glide between levels of abstraction and across concerns. Architectural boundaries should be smooth, logical gradients, not great edifices of XML.

Ok, I'll get off my soap box now. I really have nothing against MDA, and think it's a good thing. I just don't think it scales well with increasing complexity, and what we need is a development technology that scales well with increasing complexity. To go back to manufacturing speak, MDA capitalizes development costs. But once you capitalize something, you almost immediately decide you want to variablize it again, while continuing down the cost curve.
  29. Have you heard of Lean Manufacturing?
I was taught kanban, a 25-year-old Japanese method. Isn't that lean? And we mostly drive Hondas and Toyotas now.
Mass production was great in its day, but the sun is setting. Products in even the highest-volume industries are becoming more and more driven by customer needs. Software should be ahead of the curve.
You're unwittingly making the case for MDA. If every customer wants applications custom built for him, then manual development becomes even less feasible than it is today. Automated production is a timeless response to high demand. Model compilation was invented by Shlaer/Mellor. One of Steve Mellor's favorite anecdotes is telephone switching. Originally a human operator had to manually cross-connect two telephones for a call. As demand grew, it was predicted that every adult and child would eventually have to be an operator to meet demand. Now circuit switching is automatic, which let the market grow phenomenally. Software development isn't much different. So much of programming is mindless grunt work for us code monkeys. That kind of work deserves obsolescence.
    Programmers don't need cookie cutters to help them mass produce software...
    Actually they do, to keep up with rising demand and quality expectations.
30. So much of programming is mindless grunt work for us code monkeys. That kind of work deserves obsolescence.

I can't help but think that you're confusing representation with abstraction. If I represent a class using UML, is that class at a higher level of abstraction than if I use Java? The answer is no. The abstraction is the same, but the representation has changed. You are correct that all programming languages (except assembler) require translation. But from Java class to byte code the translation is one-to-many, while from UML class to Java class the translation is one-to-one - no advantage.

So how can we devise high-level abstractions? By building on low-level ones - surely this is what libraries are? Some languages allow better abstractions to be built that are more flexible and reusable. The property that comes to mind is late binding (late decision making). So abstractions become what you need, when you need it, making them easier to re-use in new and innovative ways that the original authors never dreamt of.

    The ability that a Ruby object has to change behaviour (class) at run time is a good example of how late binding opens up the possibility for powerful "meta-programming". Merely changing my Java program representation to pictures doesn't give me this.

And why is drawing pictures more productive and less grunt work than typing? I for one can type faster than I can draw.
31. But from Java class to byte code the translation is one-to-many, while from UML class to Java class the translation is one-to-one - no advantage.
    You seem unfamiliar with UML's stereotypes and profiles, which allow for translation beyond 1-1. Andro's a good example.
    The ability that a Ruby object has to change behaviour (class) at run time is a good example of how late binding opens up the possibility for powerful "meta-programming".
    Sure. Weren't you the one insisting that UML is less readable? Now you're touting Ruby's (and JavaScript's) ability for instances to mutate beyond their original definition.
And why is drawing pictures more productive and less grunt work than typing? I for one can type faster than I can draw.
XMI proves the dichotomy between text and pictures is false. Eclipse's many subpanels prove that a syntax-colored text file is an insufficient presentation for maximum developer productivity. Why does C have a '->' operator? It's a graphical approximation with intuitive significance. Why is structured logic indented? Why do developers use carriage returns when the compiler ignores them? Visuals matter -- a lot.
32. XMI proves the dichotomy between text and pictures is false. Eclipse's many subpanels prove that a syntax-colored text file is an insufficient presentation for maximum developer productivity.

    You sound very confident in your assertions. And I must admit that I am at a disadvantage when it comes to knowledge of the latest MDA tools. I still remain to be convinced, but I am open minded. Can you suggest a good paper that I could read on the subject?

    I do agree that a graphical programming environment does help. Eclipse has shown this, and advanced languages like Self have built in features that are designed to be operated graphically (drag and drop).

    I do not know what XMI is, I do know what a stereotype is though. Any useful reference appreciated.
  33. <quote>
    Any useful reference appreciated.
    </quote>

There are some articles and discussions in TSS; just search for "mda".

The following books are important for understanding the value of MDA (E-MDA):
    1) Frankel, D. S., Model Driven Architecture, Wiley, 2003
    2) Kleppe, A., Warmer, J., Bast, W., MDA Explained, Addison-Wesley, 2003.

    Hope this helps!
    Lofi.
  34. <quote>
    I do not know what XMI is, I do know what a stereotype is though. Any useful reference appreciated.
    </quote>

If you are more interested in T-MDA:
    1) Chris Raistrick et al., MDA with Executable UML.
2) Stephen J. Mellor et al., Executable UML.

This book covers both types (E-MDA and T-MDA):
    1) Stephen J. Mellor et al., MDA Distilled

    Cheers,
    Lofi.
  35. <quote>
I can't help but think that you're confusing representation with abstraction. If I represent a class using UML, is that class at a higher level of abstraction than if I use Java? The answer is no. The abstraction is the same, but the representation has changed. You are correct that all programming languages (except assembler) require translation. But from Java class to byte code the translation is one-to-many, while from UML class to Java class the translation is one-to-one - no advantage.
    </quote>

This is simply not true, as Brian said. You can have one-to-many during the transformations. And you have 2 ways to achieve this:
    - model-model and
    - model-text/model-code transformations.

At the moment the emphasis is on the model-text/model-code transformations, since in the end you want to have Java source code, XML files, DB script files, configuration files, Ant files, etc. But model-model transformations will become very important soon, since using these you can support product lines and make your model more abstract than before. See my example in the thread above...

The AndroMDA team works on an integration of ATL (an Eclipse project) and I'm using MTL (BasicMTL, an INRIA project) - which I really like, since it has the known OO concepts and is thus very easy to use - to achieve the model-model transformations.

In the end we'll hopefully integrate all the available transformation languages - which should conform to the QVT spec - to be able to "chain" the transformation processes, since I believe in "using the right language (also DSL) for the right purpose".

    Remember, writing a transformation definition is just similar to writing a general computer program. This is applied to AndroMDA (model-text/model-code) and MTL/ATL (model-model).

    Cheers,
    Lofi.
  36. You're unwittingly making the case for MDA.

    Not really. I'm making the case that MDA is handicapped compared to true metaprogramming.
    Programmers don't need cookie cutters to help them mass produce software...
    Actually they do, to keep up with rising demand and quality expectations.

No, they don't. In order for cookie cutters to be effective, the underlying concern has to be really consistent across applications. If the concern is that consistent, why can't it be packaged as a library or framework that integrates cleanly with the core language, rather than as yet another tool suite, or a library or framework that requires piles of XML?
So much of programming is mindless grunt work for us code monkeys. That kind of work deserves obsolescence.

We agree that this is the problem. But what is the cause? In my opinion, the cause is libraries and frameworks that don't cleanly integrate with the core language, leading to lots of boilerplate code and configuration. Which leads to the question: why don't libraries and frameworks cleanly integrate with the core language?

    I think there are two causes:
    1. Designing such software is really hard
2. Java was intentionally handicapped, because many of the features that lead to cleaner integration with the language, such as operator overloading, multiple inheritance, templates, and metaprogramming, were left out of Java.

So is the solution to layer even more tools on top of our existing tools, or is it to fix the tool at the center of it all: Java?
  37. If the concern is that consistent, why can't it be packaged as a library or framework that integrates cleanly with the core language...
    Your phrase "the core language" is curious. For a UML project Andro emits Java, DHTML(CSS/JavaScript/HTML), XML Schema, WSDL, and XML descriptors for things like Hibernate. And if you don't like what's generated, simply tweek the templates and regenerate. It's a fine example of metaprogramming, which you seem to like. Now what if all those language-diverse artifacts were all encapsulated in a shrink-wrapped Java library, as you suggest, and you want to tweek some corresponding XML Schema hard coded in that frozen library. How?
In my opinion, the cause is libraries and frameworks that don't cleanly integrate with the core language, leading to lots of boilerplate code and configuration.
First you said libraries are the answer, and now you blame them. As for boilerplate and configuration, the shrink-wrapped library's developer can't predict the possible customizations that a customer might want to make to the boilerplate or configuration, whereas generative templates can be readily hacked. It's a design goal for the reuse of the generator.
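
The generative idea in miniature (a toy sketch of mine, nothing AndroMDA-specific): the "template" is ordinary text-producing code that you can read, tweak, and rerun, rather than behavior frozen inside a compiled library.

public class TinyGenerator {
    // Emits a private field plus getter/setter for one model property.
    static String property(String type, String name) {
        String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
        return "    private " + type + " " + name + ";\n"
             + "    public " + type + " get" + cap + "() { return " + name + "; }\n"
             + "    public void set" + cap + "(" + type + " v) { this." + name + " = v; }\n";
    }

    public static void main(String[] args) {
        // The "model": a Person with two properties, printed as Java source.
        System.out.println("public class Person {");
        System.out.print(property("String", "name"));
        System.out.print(property("Integer", "age"));
        System.out.println("}");
    }
}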
38. For a UML project Andro emits Java, DHTML (CSS/JavaScript/HTML), XML Schema, WSDL, and XML descriptors for things like Hibernate.

    You forgot the database schema...which I guess Hibernate would generate for you, not AndroMDA.

    1. It's too many artifacts. Generating them is a million times better than manually creating them, but removing them altogether would be even better.

2. How does MDA help the programmer move closer to the problem domain? I understand how it gives the programmer more time to focus on the problem domain, but how does it help the programmer build abstractions that aren't focused on things like databases and forms and transactions and such other technical concerns that the user couldn't care less about?

    3. Can MDA templates validate that they are being correctly applied?

    4. Can MDA templates interact with one another?

    5. Can I write MDA templates for MDA templates for templates???

    6. Can MDA templates be applied to developing computational models?

    7. Can MDA templates adapt themselves to changing contexts?
    First you said libraries are the answer, and now you blame them.

    Libraries have problems, but let's not throw out the baby with the bathwater.
Generative templates, by contrast, can be readily hacked; that's a design goal of a reusable generator.

So we can cut and paste in our generative templates instead of cutting and pasting in our code and configuration files. Again, not bad, but it could be better.

    Metaprogramming can combine the best of MDA, AOP, static code analysis, DSLs, and countless other things I can't think of right now.

    But metaprogramming is not new, and it has yet to catch on. It may never catch on. But I love it, and hope it does.
  39. How does MDA help the programmer move closer to the problem domain?
Imagine if UML's many types of behavioral diagrams were animated, maybe interactive for single-stepping. Especially with activity diagrams, I could see the actors as they rendezvous. I wouldn't have to think about JMS or RPC. That kind of model insight helps diagnose a concurrency bug sooner, a bug which might be due to some inherent interaction ambiguity in the domain's requirements. Without the domain model things get ugly, and debugging concurrency sadly involves manually chasing threads in Eclipse while stepping through Java source lines. Our work shouldn't be that tedious.
    Can MDA templates validate that they are being correctly applied? Can MDA templates interact with one another? Can I write MDA templates for MDA templates for templates?
There's a commercial MDA backend that uses JSPs for its generative templates. Since the answer to your questions is always yes for JSPs, it's also yes for MDA. JSP originally proved its metaprogramming prowess at generating JavaScripted web pages.
    Libraries have problems, but let's not throw out the baby with the bathwater.
    MDA doesn't obviate or compete with libraries. But MDA hints that libraries won't always be hand coded.

Anyway, I dig your identifying metaprogramming as a broad category of possibly synergistic frameworks (aspects, annotations, MDA, etc.). Andro generates annotations (and might be enhanced to do aspects), so in a metaprogrammed ecosystem, MDA is the top of the technology stack.

Aspects, annotations, and MDA all tend to be used for generating code. This shortcoming isn't peculiar to MDA. But MDA doesn't mandate generation, and it should be possible for a model execution runtime (ie, a UML VM) to be driven directly from XMI (XML-encoded UML).
  40. <quote>
    You forgot the database schema...which I guess Hibernate would generate for you, not AndroMDA.
    </quote>

Which can easily be done directly from AndroMDA if you want. The latest AndroMDA Hibernate cartridge does not use XDoclet anymore and generates the hbm.xml files directly. So you have your choice here. I see this as the more pragmatic route: reuse whatever you can reuse, so that your template programming will be easier (chaining the reusable artefacts), and build up from the layer below to make everything easier.

    <quote>
    1. It's too many artifacts. Generating them is a million times better than manually creating them, but removing them altogether would be even better.
    </quote>

Yes, sure; therefore we have two approaches, E-MDA and T-MDA (see the thread above). The end goal is to remove all the "generated files" and have everything automatically compiled for you (T-MDA). It's a matter of time.

    Anyway "generative programming" is not new and you can see that JDK 6.0 will also support this as *a core* in combination with annotations, check out APT:
    http://download.java.net/jdk6/docs/guide/apt/GettingStarted.html
Here is a small blog post on using APT with a JSP template:
    http://www.jroller.com/page/robwilliams/20050508

The concept behind APT is already old news in the MDA approach ;-) So I hope you can see why I said MDA (top-down, driven by domain experts) and Java (bottom-up, driven by developers) may someday meet in the middle :-)
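For the curious, a bare-bones sketch of what a JSR 269 annotation processor (the standardized form of APT due in the JDK 6 core) looks like; the example.Entity annotation name and the generation step are made up for illustration:

    import java.util.Set;
    import javax.annotation.processing.AbstractProcessor;
    import javax.annotation.processing.RoundEnvironment;
    import javax.annotation.processing.SupportedAnnotationTypes;
    import javax.annotation.processing.SupportedSourceVersion;
    import javax.lang.model.SourceVersion;
    import javax.lang.model.element.Element;
    import javax.lang.model.element.TypeElement;
    import javax.tools.Diagnostic;

    @SupportedAnnotationTypes("example.Entity")
    @SupportedSourceVersion(SourceVersion.RELEASE_6)
    public class EntityProcessor extends AbstractProcessor {
        public boolean process(Set<? extends TypeElement> annotations,
                               RoundEnvironment roundEnv) {
            for (TypeElement annotation : annotations) {
                for (Element e : roundEnv.getElementsAnnotatedWith(annotation)) {
                    // a real cartridge would hand this element to a template
                    // engine (Velocity/JSP/JET) to emit code or mapping files
                    processingEnv.getMessager().printMessage(Diagnostic.Kind.NOTE,
                            "would generate artifacts for " + e.getSimpleName());
                }
            }
            return true; // claim the annotation
        }
    }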

    <quote>
    Metaprogramming can combine the best of MDA, AOP, static code analysis, DSLs, and countless other things I can't think of right now.
    </quote>

I see this differently: Extended OOP (Metaprogramming, Annotations), Frameworks, AOP, Business Rules, etc. are all part of the integrative approach of MDA, since MDA *is* the top-down approach.

    Cheers,
    Lofi.
41. Here is a small blog post on using APT with a JSP template: http://www.jroller.com/page/robwilliams/20050508
    That blog's conclusion confuses me. Can you explain this:
    Obviously, APT and JSP pages are mutually exclusive. Turns out, while I was doing this, I picked up a Dr. Dobbs and there was an article about doing codegen with or without templates. The XSLT approach may be the sane middle ground.
    Why are APT and JSP incompatible? Why is XSLT saner?
  42. <quote>
    That blog's conclusion confuses me. Can you explain this:
    Why are APT and JSP incompatible? Why is XSLT saner?
    </quote>

I don't really agree with the content of that blog. I just wanted to show that APT, in combination with template engines like JSP/Velocity/JET, will come in the core of JDK 6 :-)

The author perhaps thought that APT gives you a template engine as well, but this is not the case. So if you use APT you may also need JSP/Velocity/JET as the template engine. You can certainly do code generation without any template engine by creating the XML file (in his example, a Hibernate hbm.xml file) "on the fly" (programmatically) using JDOM... And if you need to transform it to another format after that, you may again use XSLT :-) That was my understanding...
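A rough sketch of that "on the fly" JDOM approach (class and table names are invented for illustration):

    import org.jdom.Document;
    import org.jdom.Element;
    import org.jdom.output.Format;
    import org.jdom.output.XMLOutputter;

    public class HbmGenerator {
        public static void main(String[] args) throws Exception {
            // build the Hibernate mapping programmatically instead of templating it
            Element mapping = new Element("hibernate-mapping");
            Element clazz = new Element("class");
            clazz.setAttribute("name", "example.Post");
            clazz.setAttribute("table", "POSTS");
            mapping.addContent(clazz);

            Element id = new Element("id");
            id.setAttribute("name", "id");
            clazz.addContent(id);

            // print the generated hbm.xml; an XSLT step could transform it further
            new XMLOutputter(Format.getPrettyFormat())
                    .output(new Document(mapping), System.out);
        }
    }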

    Cheers,
    Lofi.
43. So I hope you can see why I said MDA (top-down, driven by domain experts) and Java (bottom-up, driven by developers) may someday meet in the middle :-)

Hi, I've done some research and I'm even more convinced that MDA will turn out to be a dead end. Others have made this point too - why add complexity in order to "simplify" an already complex process? Why not just simplify the process at source? For example, persistence need not require Hibernate, XML mappings, etc. OODBMSs in Smalltalk and RoR in Ruby "simplify" the persistence problem at source. Wizards have their downside. The moment you want to do something the wizard designer didn't think of, you're stuck. In such cases it's back to the code. Or fiddling with templates and generators.

What we want to do is write applications and solve business problems. But look through your posts on MDA: they're full of technology (TLAs). That should tell you something.

    The purpose of a programming language is to narrow the gap between humans and computers. IMO MDA attempts to do this by sweeping technical complexity under the carpet.

    The reason why I think MDA will eventually fail is that it is built on the premise that top-down development is preferable.

If over-engineered, inflexible designs are your goal, then top-down works. If you want to build lightweight, adaptable and flexible systems, then top-down design alone is insufficient.

For example, think of the scenario where an architect working at 10,000 feet decides that the design should be so. Then a coder working at the coal face notices a recurring pattern and sees a simplification, not visible at 10,000 feet, that will massively reduce complexity across the system. If you only do MDA then this optimisation will be lost (not visible), and your maintenance engineers will inherit unneeded complexity.

How do you find the simplest thing that could possibly work? By not designing everything up front, and building just enough to meet a single test. After passing that test you're free to simplify your model, ensuring that it still passes all the existing tests. Test-build-simplify iteratively, every few minutes.

This approach significantly reduces code (model) bloat, because you are constantly getting feedback on:

    1. The completeness of your specification (tests)
    2. The correctness of your model
    3. Smells (duplication and complexity) in your model.

    Each one of these concerns can be tackled independently without needing to worry about the others. And you can change direction as you "discover" what is the simplest solution. Look up references to Test Driven Development on the web for a fuller explanation.

So my point is keep things as simple as possible, but no simpler. Hiding complexity behind a code generator does not reduce complexity. The complexity is still there and will become visible again the next time your customer requests a change that you and your generator did not anticipate.
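To sketch that test-build-simplify cycle in code (a made-up Discount class, JUnit-style, assuming JUnit 3 on the classpath): write one failing test, build just enough to pass it, then simplify while everything stays green:

    import junit.framework.TestCase;

    public class DiscountTest extends TestCase {
        // step 1: a single test specifies the behaviour
        public void testTenPercentOffAtOneHundred() {
            assertEquals(90.0, new Discount().apply(100.0), 0.001);
        }
    }

    // step 2: just enough implementation to pass; step 3: simplify,
    // re-running all existing tests after every change
    class Discount {
        double apply(double amount) {
            return amount >= 100.0 ? amount * 0.9 : amount;
        }
    }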
44. Hiding complexity behind a code generator does not reduce complexity. The complexity is still there and will become visible again the next time your customer requests a change that you and your generator did not anticipate.

    +1. Well said. Relying on tools for simplicity is generally a bad idea. We've been trying to get visual programming right for over 15 years now, in projects that I've been directly involved in. It's far better to start on a cleaner foundation. 25 years of C-based syntax, a syntax invented for systems programming instead of applications programming, is enough.

I think we're seeing so many new programming paradigms and new open source frameworks in Java (AOP, annotations, XML binding, web services, SOA, EJB 3, DI, MDA, seven or eight versions of MVC, Seam, Rife, and a whole lot of others) because the current language and frameworks are not getting it done.
  45. <quote>
    +1. Well said. Relying on tools for simplicity is generally a bad idea. We've been trying to get visual programming right for over 15 years now, in projects that I've been directly involved in. It's far better to start on a cleaner foundation. 25 years of C-based syntax, a syntax invented for systems programming instead of applications programming, is enough.
    </quote>

I feel really sad to hear a comment like this after such a long discussion... What does visual programming have to do with MDA/MDD? :-(

    <quote>
    I think we're seeing so many new programming paradigms and new open source frameworks in Java-AOP, annotations, XML-binding, web-services, SOA, EJB 3, DI, MDA, seven or eight versions of MVC, Seam, Rife, and a whole lot of others, because the current language and frameworks are not getting it done.
    </quote>

and it never will... It's just like a game of "catch me if you can"... Therefore we need an *integrative approach* -> MDA/MDD/Software Factories.

    Cheers,
    Lofi.
46. For example, think of the scenario where an architect working at 10,000 feet decides that the design should be so. Then a coder working at the coal face notices a recurring pattern and sees a simplification, not visible at 10,000 feet, that will massively reduce complexity across the system. If you only do MDA then this optimisation will be lost...
    The two engineering roles you give are of historical interest and absent from MDA. MDA has two engineering roles, but very different from the ones you suggest. An MDA architect maintains the reusable architecture, which has two parts (generative templates and runtime libraries). The MDA analyst produces domain models and avoids the implementation target language. So in MDA, it's the architect's responsibility to identify and apply refactorings. These refactorings have no impact on the analyst's models, and the analyst likely never examines the generated code.

    But there's a very important beneficial phenomenon that occurs with MDA that traditional projects miss. Back in the Shlaer/Mellor days (Mellor's the ongoing genius of MDA) that predated MDA, it was colloquially said that when an architecture (templates and runtime libraries) leaks, it leaks like a sieve. That's beneficial since failing fast and loudly is very helpful for detecting bugs. This phenomenon suggests to me that a reusable MDA architecture will approach zero defects faster than the development of traditional libraries and frameworks would.
    By not design everything up front, and building just enough to meet a single test.
It's a myth that MDA can't do agile iterations. Obviously an analyst can tweak his domain model, regen, and retest as frequently as he wants. But another facet of MDA you missed is that the reusable architecture (templates and runtime libraries) would ideally be bought ready to use. Tweaks to the reusable architecture then occur in agile iterations. One beauty of MDA is that the architectural iterations and the domain model iterations are decoupled and can cycle independently. Whereas in a traditional project an architectural upgrade from, say, one browser version to another might require all engineers to revisit their work. MDA architectures are supposed to be transparently pluggable, such that the domain models aren't disturbed if a J2EE architecture is replaced with a .NET architecture. Or in MDA parlance, the platform independent model (PIM) can be transformed to alternative platform specific models (PSM).
  47. The two engineering roles you give are of historical interest and absent from MDA. MDA has two engineering roles, but very different from the ones you suggest. An MDA architect maintains the reusable architecture, which has two parts (generative templates and runtime libraries). The MDA analyst produces domain models and avoids the implementation target language.

    You are correct, the roles I mention are historical. The two roles you mention for MDA sound similar though: MDA Analyst, and MDA Architect.
     
    In Agile development we ALL do "Analysis" and we ALL do "Architecting" all the time. Intermingling and testing both with concrete feedback.
    Back in the Shlaer/Mellor days (Mellor's the ongoing genius of MDA) that predated MDA, it was colloquially said that when an architecture (templates and runtime libraries) leaks, it leaks like a sieve. That's beneficial since failing fast and loudly is very helpful for detecting bugs. This phenomenon suggests to me that a reusable MDA architecture will approach zero defects faster than the development of traditional libraries and frameworks would.

I respect Shlaer/Mellor and his work - but he is very much top-down. His OO method was very good at static modelling - when he moved on to "modelling the world in states" (dynamic behaviour) his method fell short, and no one I know ever applied his second book. Interestingly, you "suggest" benefits for MDA. In software, like everything else, experience counts. Whether your suggestions prove to be correct or not will be borne out in time and with experience.

<quote>Obviously an analyst can tweak his domain model, regen, and retest as frequently as he wants. But another facet of MDA you missed is that the reusable architecture (templates and runtime libraries) would ideally be bought ready to use. Tweaks to reusable architecture then occur in agile iterations. One beauty of MDA is that the architectural iterations and the domain model iterations are decoupled and can cycle independently.</quote>

Again, how we would like the world to be and how it actually is are often different things. That's why I say "the simplest thing possible, but no simpler". If these two concerns can be decoupled in the way you describe then great, but is that how things really are? Look to nature: many complex natural structures have both a macro and a micro organisation. Often the two are related in complex, yet simple and elegant, ways.

My experience is that imposing a single generic view of the world rarely works in software. Software development IS complex and requires multiple views (top-down, bottom-up, middle-out, edges-in), multiple skills (analysis, architecting, ...) and approaches. For me this is what makes software development a creative art rather than a science. In the end, the only definitive guide is results (concrete empirical feedback).

    Ultimately time will tell, and I will watch the development of MDA with interest.
  48. MDA roles.[ Go to top ]

    The two roles you mention for MDA sound similar though: MDA Analyst, and MDA Architect.
Those two roles are both developers, but they're very dissimilar -- skills, focus, formalisms, social nature -- all different. The architect hand codes. The analyst doesn't. The analyst cares about knowledge transfer (from non-technical subject matter experts). The architect doesn't. The analyst can do Business Process Reengineering. The architect can't.

    MDA's splitting of development into two roles explicitly reaps the benefits of specialization (higher productivity, reduced labor bill, etc). A uniform crew of traditional code monkeys can't. A big benefit of this specialization is that for many development teams MDA can eliminate the role of resident architect. MDA presumes that architectures (templates and runtime libraries) are pluggable and can be bought shrinkwrapped and ready to use. Ie, a development team might make do without an architect. The team's developers can all be analysts focused on gathering domain knowledge and modeling it.
  49. MDA roles.[ Go to top ]

MDA's splitting of development into two roles explicitly reaps the benefits of specialization... The team's developers can all be analysts focused on gathering domain knowledge and modeling it.

    Hi Brian,

I've heard all this stuff before. In fact I did it for over 10 years. It doesn't work. That's why so many people are moving to a more Agile development model. Business and technical issues are all inter-related. Customers are looking for value (the most useful features at the cheapest price and as soon as possible). To achieve value you need an appreciation of both the business goals and the technology.

Gone are the days when a programmer spends ages trying to get something to work just because it is in the Business Analyst's spec, only to find out that the feature wasn't that important, and that a bunch of things that were a lot easier and would have delivered ten times more business value could (and should) have been built and delivered in the same time.

    I would suggest that you take a look at methodologies like SCRUM or XP as they clearly describe how ludicrous this mindset is.

I'll say it again: software development is complex. Computers can only do very simple things. To do something "useful" means getting the computer to do a very large number of things in a predefined sequence. Hence the complexity. Tackling this inherent complexity is a creative, multi-skilled discipline.

Good developers possess all the skills you describe, and that is why they are much in demand.
  50. MDA roles.[ Go to top ]

    Business and technical issues are all inter-related.
    That seems like FUD to me. Eg, the architect of an office park needn't know which industry verticals the park's tenants will come from. The same can be true of software architecture. Eg, we're using Java for day trading. Sun's architects needn't know anything about day trading to produce a reusable and immensely valuable platform.
    I would suggest that you take a look at methodologies like SCRUM or XP as they clearly describe how ludicrous this mindset is.
    Sorry dude. I gotta call bullshit on your claim that Scrum and XP are against generative architecture. I love Scrum/XP since they're the best, and they're totally compatible with model translation. If you got proof to the contrary, phucking cite it. This kind of FUD gets me sore, nothing personal. Sorry.

Anyway, Scrum/XP are software development strategies, with an emphasis on manageability, especially minimizing risk. All of this applies to software development generally, even if development is limited to producing models. Like most methodologies, Scrum/XP are technology neutral. Like most technologies, MDA is methodology neutral. So Scrum/XP and MDA are compatible. They're a great fit.
  51. MDA roles.[ Go to top ]

    So Scrum/XP and MDA are compatible. They're a great fit.

Again you've missed the point (blinded by your own zealousness). You were talking about roles. Both SCRUM and XP say: get together a team and empower it to do whatever is needed to get the job done. Both are iterative and empirical, with constant feedback and communication. The roles you describe just aren't agile. When does the PIM designer get feedback on the strain he is placing on the generator and the PSM? When does the PSM generator designer get feedback on the type of optimisations needed to support a given PIM? How is the PIM designer empowered when his PSM generator fails?

You state they are separate roles (virtually separate teams): no communication, no feedback. Somehow the architect will come up with a generator fit for all problems - how?

This is definitely NOT Agile.
  52. MDA roles: PROCESS, METHODS, TOOLS[ Go to top ]

    <quote>
You state they are separate roles (virtually separate teams): no communication, no feedback. Somehow the architect will come up with a generator fit for all problems - how?
This is definitely NOT Agile.
    </quote>

You are missing Brian's point. In software engineering you have to distinguish between (taken from Pressman, Software Engineering):
- Software PROCESS: XP, Agile, RUP are software processes. "Software process is the glue that holds the technology layers together and enables rational and timely development of computer software... The key process areas form the basis for management control of software projects, and establish the context in which technical methods are applied, work products (models, codes, docs, data, etc.) are produced, milestones are established, quality is ensured and change is properly managed" (taken from Pressman).
    - Software METHODS: "how to's for building software" (taken from Pressman) like OO, CBSE, ... and MDA (an integrative approach).
    - Software TOOLS: automated and semi-automated support for the process and methods. IDE, CASE tools are here in this part.

1) MDA is *not* a process, it's a method: MODEL DRIVEN ARCHITECTURE -> your MODEL is the most important part, separation of concerns (CIM, PIM, PSM, CM), automatic transformations, etc.! Therefore, like Brian said, you CAN and SHOULD combine MDA with any software process available, like XP, RUP or whatever... Please check out page 21 of Pressman's book to see what *a common process framework* looks like. It can be instantiated to fulfill your needs. So, again, MDA itself does not have anything to do with XP, RUP, or any Agile development process.

    2) MDA is *not* a CASE tool but can be supported with CASE tools.

    You see, it is easier if you "separate the concerns" ;-) PROCESS, METHODS and TOOLS are also an example of separation of concerns ;-)

    Cheers,
    Lofi.
  53. MDA roles: PROCESS, METHODS, TOOLS[ Go to top ]

MDA is *not* a CASE tool but can be supported with CASE tools... PROCESS, METHODS and TOOLS are also an example of separation of concerns ;-)

You can only separate concerns if those concerns are indeed separate. When concerns are intimately linked you need a more holistic approach. Agile methodologies do not fit comfortably into the "PROCESS" box as you describe. They rely upon supporting tools, skills, methods, organisational cultures, management practices, etc. All these things are linked. They are people things. The first value in the Agile manifesto is "individuals and interactions over processes and tools", so by definition Agile is not just about process.

MDA is not Agile, because it doesn't support and reinforce agile values:

    http://agilemanifesto.org/
54. MDA is not Agile, because it doesn't support and reinforce agile values: http://agilemanifesto.org/

    You aren't actually quoting agilemanifesto.org. Allow me:
    "I think the executable model (aka Agile MDA) is a great idea. This is my first time on this site. The white paper on Agile MDA on OMG site led me here."
    http://agilemanifesto.org/sign/display.cgi?ms=000000027
55. MDA is not Agile, because it doesn't support and reinforce agile values: http://agilemanifesto.org/
You aren't actually quoting agilemanifesto.org. Allow me: "I think the executable model (aka Agile MDA) is a great idea. This is my first time on this site. The white paper on Agile MDA on OMG site led me here."

Hi Brian, I can see my attempts to educate you are failing :^). Not wanting to pull rank, but I actually earn my money as an Agile Coach.

Agility is about change. A group of seasoned practitioners (with the emphasis on practitioners) got together and identified a set of common values and principles that they all agreed supported rapid change. If the utopian view of MDA (T-MDA) proves to be viable then you could argue that T-MDA is in fact agile. Given that E-MDA is the only viable choice today (and ever, if you ask me), MDA cannot really be considered Agile (the agile principle of simplicity comes to mind).

BTW: if the best thing you can find to support your argument is a quote from a misguided visitor to the Agile Alliance website, then boy, you are really struggling. How about coming up with a quote from a signatory to the agile manifesto itself?

I guess that could prove to be a bit difficult :^).
  56. First Brian wrote:
    Agility is the ability to respond to changing requirements quickly.

    Then Paul parrots:
    Not wanting to draw rank, but I actually earn my money as an Agile Coach. Agility is about change.

    It appears we both fully understand development agility. And yet we still disagree on how agility regards MDA. I infer that what differs is our understanding of MDA, which is something I've said twice before.

The chat between us three (Lofi too) has been great, but never once did you expose my understanding of MDA as incomplete. Quite the opposite; many times I had to address gaps in your understanding of very basic MDA theory and practice. The basic roles, even the precise meaning of the word 'architecture', were core MDA concepts I had to explain in response to various strawman arguments and prejudgments.

In the face of this knowledge gap, particularly awkward and premature were your claims that MDA isn't "promising", that MDA isn't agile, that it violates agile principles, and that it supposedly has been rejected by agile's authorities. You couldn't cite anything that made similar claims, and I even mentioned the agile MDA whitepaper at omg.org. Do you respect the academic integrity of OMG?
    How about coming up with a quote from a signatory to the agile manifesto itself?
Paul, I respect your expertise at agile, and I'd be proud to handcode with you. But your willful blind spot for MDA prevents me from taking you seriously in this thread. Steve Mellor (the inventor of model compilation) is one of the Agile Manifesto's few signatories. Anyone who wants to learn more about agile MDA could read:

    Steve Mellor's "Agile MDA" chapter contributed to the book The MDA Journal.

    Scott Ambler's "A Road Map for Agile MDA" at AgileAlliance.org, which "Presents a realistic strategy for taking an agile approach to the OMG's Model Driven Architecture (MDA) which leverages the principles and practices of Agile Modeling." Do you respect Scott Ambler?

    OMG's Agile MDA whitepaper.
  57. Hi Brian,

I hope you view this as a light-hearted discussion, with a view to testing each other's ideas and perhaps learning something; I do. You're right, my knowledge of MDA is limited, but my experience of software development is not. Let me quote from the article you kindly provided:
    To some, the notion of putting “agile” and “modeling” in the same sentence makes no sense
Yep, that's me :^)

One reason for the disconnect is the recognition of the verification gap. This gap comes about when we write documents that cannot execute.

Yeah, me again. So how do you bridge the verification gap?
To reach this happy state, models must be complete enough that they can be executed standing alone. There are no “analysis” or “design” models, because all models are equal. Models are linked together, rather than transformed, and they are then all mapped to a single combined model that is then translated into code according to a single system architecture. This approach to MDA is called Agile MDA.

OK. So MDA is only Agile MDA if your PIM model can be translated directly into code. So Agile MDA only applies to T-MDA (didn't I say this too?).

The question then is: what level of information do you need in your PIM to do effective T-MDA? For reasons I've explained before (deterministic, mathematically proven transforms), you will need to be modelling at a pretty detailed (low) level using something like UML 2.0. And as I've also said before, if that is what you are doing then you are in fact programming, and there are much better programming languages out there than UML 2.0.

Let's face it, Brian, we may never agree. I have learnt a lot about MDA though. Do yourself a favor and take a look at the link I provided for Croquet. It will give you an idea of what is possible using a different approach. You may be surprised!
  58. So MDA is only Agile MDA if your PIM model can be translated directly into code.
    The part of OMG's Agile MDA whitepaper that says it best is:
    "Agile MDA is based on the notion that code and executable models are operationally the same. Hence, the principles of the Agile Alliance (testing first, immediate execution, racing down the chain from analysis to implementation in short cycles, for example) can be applied equally to models. An executable model, because it is executable, can be constructed, run, tested and modified in short incremental, iterative cycles."
    I totally get a warm fuzzy from that.
    Do yourself a favor and take a look at the link I provided for Croquet. It will give you an idea of what is possible using a different approach. You may be surprised!
Given the trend toward grids, p2p, teleimmersion, and remote collaborative e-science, Croquet is a good idea. I dig that the video you gave mentions Carl Hewitt's actor-based languages, which fascinated me in grad school over a decade ago. Shlaer-Mellor is also actor-based. Ie, an object can have its own thread, and objects can queue messages for each other. Something Croquet and MDA have in common is a raising of the bar for objects. Both endow objects with more mechanisms than traditional OO objects have. Eg, MDA objects have formalized state machines, etc.
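A bare-bones Java sketch of that actor style (illustrative only, not Croquet's or Shlaer-Mellor's actual machinery): each object owns a thread and a mailbox, and peers queue messages instead of calling each other directly:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ActorSketch {
        static class Actor implements Runnable {
            private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();
            private final String name;

            Actor(String name) { this.name = name; }

            void send(String msg) { mailbox.offer(msg); } // peers queue messages

            public void run() {
                try {
                    while (true) { // the object's own thread drains its mailbox
                        System.out.println(name + " got: " + mailbox.take());
                    }
                } catch (InterruptedException e) {
                    // interrupted: shut down quietly
                }
            }
        }

        public static void main(String[] args) throws Exception {
            Actor a = new Actor("a");
            Thread t = new Thread(a);
            t.start();
            a.send("hello");
            Thread.sleep(100);
            t.interrupt();
        }
    }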
  59. T-MDA is based on E-MDA[ Go to top ]

    <quote>
OK. So MDA is only Agile MDA if your PIM model can be translated directly into code. So Agile MDA only applies to T-MDA (didn't I say this too?).
    </quote>

Yes, this is correct per Mr. Mellor's definition of Agile MDA (see the link from Brian).

But we should not forget that the details of E-MDA build up to T-MDA. In the end, T-MDA needs to compile the models into a running application, so we ("vendors of model compilers") need to know how this can be achieved productively. You surely won't write a model compiler which compiles your model directly into assembly code... :-) Instead it should translate your models into complete Java programs that can run directly (without you programming a single line of Java).

    So, the development evolution will be like this:
    E-MDA -> T-MDA.

I remember in the older days of AndroMDA you needed to write Hibernate Query or EJB EB Query to query the entities in your UML models. So if you wanted to change your persistence layer you needed to rewrite those queries in your UML models. Not good. But now AndroMDA offers two translation libraries (and it can be extended), so you model your queries in OCL (independent of Hibernate and EJB EB) and AndroMDA translates them into the correct Hibernate Query or EJB EB Query, depending on your transformation definitions. Very cool! Check this out:
    http://www.andromda.org/andromda-translation-libraries/index.html
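To make that concrete, an invented example (not AndroMDA's actual translation output): a query modeled once in OCL, and the HQL a Hibernate translation library might render it to:

    -- OCL in the model:
    Person.allInstances()->select(p | p.age > 18)
    -- a possible HQL translation:
    from Person as p where p.age > 18

The same OCL could equally be translated to EJB QL, which is the whole point of keeping the query in the model.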

So you can see it's a matter of time (you can also help make it go faster! ;-)) until we have an action language integrated within AndroMDA which can be translated 100% into Java or the like, so that we can "translate" models into running applications, just as T-MDA specifies.

    To be able to reach this goal we need to understand E-MDA first... therefore: E-MDA -> T-MDA.

    Cheers,
    Lofi.
  60. ...we'll have an action language integrated within AndroMDA which can be translated 100% into Java...
That would be the final nail in hand coding's coffin. UML-2.1 Actions are awesome, especially since the abstract syntax allows for a visual concrete syntax. Ie, flow diagrams. This truly is Turing-complete visual programming on par with the ProGraph language.
61. ...we'll have an action language integrated within AndroMDA which can be translated 100% into Java...
That would be the final nail in hand coding's coffin... This truly is Turing-complete visual programming on par with the ProGraph language.

In my opinion, flow diagrams tend to encourage "if/then/else" type thinking when thinking in terms of tables or graphs or dynamic decision trees might be much more effective.

Diagrams showing "if/then/else" style logic are useful for communicating with non-technical people. But I wouldn't want my actual software based on them.
  62. In my opinion, flow diagrams tend to encourage "if/then/else" type thinking...
    Indeed ProGraph had a first-class language structure for it, called a "case".
...when thinking in terms of tables or graphs or dynamic decision trees might be much more effective.
An amazing thing about UML-2 action semantics is that it has an abstract syntax biased towards flow diagrams. But a design goal of the abstract syntax is to accommodate a variety of Turing-complete languages. All of the existing UML-2 action languages that I know of are textual. An abstract syntax that flexible could easily handle the other visual processing paradigms you mention. Eg, almost countless languages have been reduced to JVM bytecode, which is itself an abstract processing syntax. You mention tree processing. Have you heard of Aardappel?
    Diagrams showing "if/then/else" style logic are useful for communicating with non-technical people. But I wouldn't want my actual software based on it.
    The manifest destiny of software engineering is a quest for intuitive abstraction. If a syntax is accessible to non-techies, it likely is more intuitive, more abstract, and imposes less cognitive load. This has an evolutionary advantage. Were this not so, we'd still be hacking C, which is phenomenally expressive (eg, it has a 'register' keyword). The nature of software engineering is that abstraction (eg, the JLS doesn't mention registers) increasingly trumps expressiveness. There's an economic imperative at work here. And MDA exploits this well.
  63. Were this not so, we'd still be hacking C, which is phenomenally expressive (eg, it has a 'register' keyword). The nature of software engineering is that abstraction (eg, the JLS doesn't mention registers) increasingly trumps expressiveness. There's an economic imperative at work here. And MDA exploits this well.
    Hi Guys,
I think we are hitting the crux of the difference between our views. I wonder if you watched all of the Croquet video. What Alan Kay had to say is, IMO, more important than Croquet itself. Croquet was implemented by just 3 developers. I've been following the project, and all that you see was done in under a year.

Alan Kay's biggest blast was at the education system. In his day at university, students were taught to build things: processors, operating systems, even programming languages. Hence they had a deeper understanding and could make better design trade-offs at all levels of abstraction.

A couple of examples: current processors are about 1000 times slower when it comes to running VMs and late-bound languages than the Alto was in the seventies (like-for-like micro-electronics technology). Why? Another is that most graduates today don't know how to hand code efficient C.

Does this matter? IMO yes. We have tried to dumb down software development, and every time we do it we fail. EJB was an attempt to use a "template" approach to enterprise development. The problem was that the template didn't always fit. The "dumbed down" environment was overkill for some projects and not flexible enough for others. So skilled developers started to roll their own abstractions to solve their own problems, hence the growth of things like Spring and Hibernate. We also noticed that the roll-your-own approach was actually simpler than using the dumbed-down "template".

So the issue is: how do you give people the power to come up with optimal abstractions? Well, the first step is to educate them. We can't all be as good as the Croquet programmers, but this level of skill should be the ultimate goal for all of us.

We need to stop the premature freezing of design patterns into inflexible "templates" that are then "sold", and encourage better core programming skills.

    People will come up with their own abstractions to solve their own specific problems, which will lead to more optimal solutions.

As for re-use, this should be mined bottom-up. Stable abstractions that have stood the test of time become good candidates for entry into a domain-specific library (open source, probably) for all to use.

We need skillful developers who understand how to manage complexity through abstraction. They need to know how to create good abstractions using pure OO concepts and late-binding techniques. For this we need late-bound, dynamic languages that have simplicity and conceptual integrity at their heart (easy to teach). This becomes the platform for higher-level application/domain-specific scripting languages that are tailored to end users (and less skilled programmers), not UML 2.0. Adding scripting capabilities in a late-bound architecture is easy.

    IMO dumbing down core development just doesn't work. I can think of numerous examples where it has been tried and failed.
  64. We have tried to dumb down software development, and every time we do it we fail.
If this were true (but it isn't), then silly attempts to de-skill assembly language, such as the introduction of C, would never have succeeded. Software engineering has major breakthroughs in de-skilling roughly every decade. The JVM is itself an immense stab at de-skilling -- it manages memory, it checks array indices, it verifies typecasts as they happen. This means code monkeys who weren't good at managing memory, array indices, and typecasts can now be productive in Java, when they had a severe competitive handicap in C/++. Aspects of software engineering are continually being de-skilled according to the economic imperative I mentioned. You can't stop progress, especially not technological progress.
    They need to know how to create good abstractions... For this we need late bound, dynamic languages...
Static typing is one of the greatest quality boosts in the history of software engineering, chiefly because it's cheaper and quicker to catch bugs at build time. "Cheaper and quicker [to approach zero defects]" is exactly the economic imperative of the software industry -- always has been and always will be. Script kiddies mysteriously get by without static typing, but mission-critical software would sorely miss it. Whole categories of bugs are prevented by a source compiler or bytecode verifier.
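A small illustration of the build-time point (a made-up snippet): with generics the compiler rejects the bad add outright, while with raw types the same bug only surfaces when the value is finally used:

    import java.util.ArrayList;
    import java.util.List;

    public class TypingSketch {
        public static void main(String[] args) {
            List<String> typed = new ArrayList<String>();
            // typed.add(Integer.valueOf(42));  // rejected at compile time

            List raw = new ArrayList();          // raw type: no build-time check
            raw.add(Integer.valueOf(42));        // accepted...
            String s = (String) raw.get(0);      // ...ClassCastException at runtime
            System.out.println(s);
        }
    }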
  65. We have tried to dumb down software development, and every time we do it we fail.
If this were true (but it isn't), then silly attempts to de-skill assembly language, such as the introduction of C, would never have succeeded. Software engineering has major breakthroughs in de-skilling roughly every decade. The JVM is itself an immense stab at de-skilling -- it manages memory, it checks array indices, it verifies typecasts as they happen. This means code monkeys who weren't good at managing memory, array indices, and typecasts can now be productive in Java, when they had a severe competitive handicap in C/++.

This is the point I've been making about education. The breakthroughs you claim for the JVM have been present in LISP since the 1950s (runtime memory management, array bounds checking, type checking, etc).

    They need to know how to create good abstractions... For this we need late bound, dynamic languages...
Static typing is one of the greatest quality boosts in the history of software engineering... Whole categories of bugs are prevented by a source compiler or bytecode verifier.


Again, another common misconception. C was a loosely typed language (remember void and void*?). C++ decided to use strong typing on the back of ideas coming from languages like Ada (a military, mission-critical, high-level language).
For high-level application programming, where the effects of runtime errors are not "mission critical", people have known for a long time that late-bound, dynamically typed languages offer advantages. Again this was pioneered in LISP in the '50s and '60s.

Dynamically typed languages do perform type checking, but it is performed at runtime (much like the dynamic type-cast checking you describe in the Java JVM). This approach has one major advantage: software components are bound at runtime. This means that you can do things like treating sections of code as values, and you can modify your code at runtime (changing the program whilst it's still running).
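A rough Java approximation of that idea (a dynamic proxy; truly dynamic languages go much further and let you redefine methods on live objects): the behaviour behind an interface is just a value that can be swapped while the program runs:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    public class LateBindingSketch {
        interface Greeter { String greet(String name); }

        public static void main(String[] args) {
            final String[] template = { "Hello, %s" };

            // all calls are dispatched at runtime through the handler
            Greeter greeter = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class[] { Greeter.class },
                    new InvocationHandler() {
                        public Object invoke(Object proxy, Method m, Object[] a) {
                            return String.format(template[0], a[0]);
                        }
                    });

            System.out.println(greeter.greet("world")); // Hello, world

            // "modify the code" while the program is still running
            template[0] = "Goodbye, %s";
            System.out.println(greeter.greet("world")); // Goodbye, world
        }
    }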
  66. <quote>
    The manifest destiny of software engineering is a quest for intuitive abstraction. If a syntax is accessible to non-techies, it likely is more intuitive, more abstract, and imposes less cognitive load. This has an evolutionary advantage. Were this not so, we'd still be hacking C, which is phenomenally expressive (eg, it has a 'register' keyword). The nature of software engineering is that abstraction (eg, the JLS doesn't mention registers) increasingly trumps expressiveness. There's an economic imperative at work here. And MDA exploits this well.
    </quote>

100% agree with you Brian! And I suppose we all agree that we need a higher abstraction level. MDA is just an "enabler" for this purpose.

BTW, I really like the concept of separating the abstract and concrete syntax in Action Semantics/QVT/... Still, we need a "good" concrete (surface) textual syntax, and IMO it should look similar to Java to make the transition easier. Just like C -> C++ -> Java :-)

    Cheers,
    Lofi.
<quote>The manifest destiny of software engineering is a quest for intuitive abstraction... And MDA exploits this well.</quote>

I think there is just a basic misunderstanding here. C is good and still relevant. What other language would you want to write your hardware-independent VM in? Saying that C is bad is like saying that microcode or the transistor is bad. They are all needed.

We want to use a language whose syntax and runtime are better suited to application development. Well, such languages have existed since the mid-seventies; look at Smalltalk. Some would argue that they existed even before that with LISP. Not being a LISP guru, they could be right.
    I really like the concept of separating the abstract and concrete syntax in Action Semantics/QVT/... Still we need a "good" concrete (surface) textual syntax and IMO it should look similar to Java to make the transition easier. Just like from C -> C++ -> Java :-)Cheers,Lofi.
It is this type of thinking that IMO defines all that is wrong with modern IT. We have processors whose architectures are based on a 4-bit washing machine controller (4040 -> 8080 -> etc.). We have so-called object-oriented languages based on a universal procedural assembler, "C". All in the name of backward compatibility. Our last saviour, Oak, was developed to program toasters before it was re-christened Java and marketed. Just because people didn't want to learn a new syntax.

All this short-termism. Why? We all want someone else to solve the computing crisis for us. We want the vendors to do it. When we sit down to program a computer we are GOD. We are responsible for the abstractions we choose, and we are also responsible for understanding how those abstractions interact. If we do not understand these things then we are lost. There is no GOD inside the machine, there is no GOD within your MDA runtime.

So when everyone leaves university only knowing how to program in QVT, who will write and maintain your MDA runtime? Who will optimise your VM or your processor to match your runtime? Who will build everything needed for Agile MDA? Not the vendors; they will be hiring from the same universities as everyone else.

We need to get back to basics and take ownership of solving these problems ourselves. When we do, we will realise that we've already got the tools we need today.
  68. Paul,

I think there is a fundamental flaw in your arguments. You say that the transistor is needed, as C is too, but I disagree: they are optional, or better yet, they have their niche use. The transistor was needed (unavoidably) before ICs were invented. C was needed (unavoidably) before higher-level languages were invented. Of course each has its use today, but when it comes to more complex problems, you'll need higher abstractions to deal with them. Imagine building a Pentium IV CPU using plain old transistor components: it is impossible today; that's why ICs exist. The same thing with computer languages: imagine coding a complete J2EE server in assembler from scratch, with all the portability it has today: it is impossible too without higher-level artifacts (OO, a VM, libraries and all).
So yes, C is good and relevant, as the transistor is too, but to cope with more complex problems we need more abstract solutions. The older solutions which solved old problems will still be used, but the natural evolution is towards more complexity; thus, just as valves were replaced by transistors and then by ICs, the same will happen with computer languages, as we go from assembler to C to Java to whatever is being devised by today's pioneers. I wouldn't disregard MDA just now, in light of what has happened in computer science since its beginnings. Maybe it is hype today, maybe it falls short of any and all we need now, but the potential is there, and evolution towards such a solution for more complex problems in the future may lie there. Not a solution to all problems, but to more complex problems.
Who will be responsible for maintaining whatever lies under all that abstraction? Well, the same people who maintain JVMs and ICs today. I bet very few people, of all the Java programmers in existence, ever had to deal with their JVM's code somehow. We just expect it to work, and it does (well, most of the time at least... :). The same goes for your CPU: we don't have to worry about it; if it breaks we upgrade it and it's done. No need to grab your soldering iron today to fix your computer.
Higher abstractions are unavoidable; it is just a matter of time until someone invents the solution which will take us all to a new level of thinking, the same way it happened when we jumped from assembler to C, for example. I am not saying that this new solution will solve all the problems, just as there are a few people coding assembler today (because they need it, not because they want it), and lots coding C. I am saying that for the more complex systems we'll have to create in the future, new solutions will have to appear in order to allow us to at least be able to solve them in a more practical way.

    Regards,
    Henrique Steckelberg
69. Paul, I think there is a fundamental flaw in your arguments. You say that the transistor is needed, as C is too, but I disagree: they are optional, or better yet, they have their niche use...

Hi, everything you say is correct. I don't think I made my point that well. The processor in the computer I'm using now contains transistors, and the JVM I'll be using in a minute was probably written in C. I just think that an appreciation of this is important. My concern is that people somehow believe that this stuff has gone away just because they do not have to deal with it on a day-to-day basis.

    Understanding how a compiler works is significant to your understanding of something like MDA. The idea that someone else needs to understand this stuff and you can just use it means that you are at the mercy of these 'other people'.

    I don't want to be sold to. I want to decide what technology is best to solve my problem. I feel I know what that is.

    I'm not sure that many managers or developers have an idea of what technology they should be using. Instead they use whatever their vendor chooses to sell them. I can see how something like MDA is a great gift for vendors; I'm just not sure that it is such a great idea for most projects.

    I think the point I was trying to make is buyer beware. And I guess you can't beware if you've given up all responsibility for understanding the underlying technology (abstractions).
  70. I agree with you partially. Of course knowing how a computer works helps you code better; OTOH, not knowing it shouldn't hinder you from being able to code, at least for solving 99% of the problems! That is, this knowledge is optional, not mandatory. Most people don't know how VMs work in detail, but that doesn't hinder them from using them, and unless you are tackling some pretty complex solutions involving distribution, very high performance or complex concurrency, you shouldn't ever need to know how VMs work. How many people know how a combustion engine works? Most people drive cars every day without even a slight notion of what's going on under the hood!
    Maybe now that MDA is such a novelty, having this inner knowledge may not be optional, but as the technology evolves, such need will fade, and we'll probably be using models to create systems as naturally as we do with code today. I can't think of anything that could stop this evolution from eventually happening.

    Regards,
    Henrique Steckelberg
  71. Henrique,

    always nice to hear your comments in a TSS discussion...
    No additional comments here; none are necessary, I agree with you! ;-)

    Cheers,
    Lofi.
  72. I agree with you partially. Of course knowing how a computer works helps you code better; OTOH, not knowing it shouldn't hinder you from being able to code, at least for solving 99% of the problems! That is, this knowledge is optional, not mandatory. Most people don't know how VMs work in detail, but that doesn't hinder them from using them, and unless you are tackling some pretty complex solutions involving distribution, very high performance or complex concurrency, you shouldn't ever need to know how VMs work.
    I didn't say we should know how VMs, compilers, processors etc. work in detail. I said that we should at least have an appreciation of how they work.
    How many people know how a combustion engine works? Most people drive cars every day without even a slight notion of what's going on under the hood!

    Fortunately, car manufacturers do not control the roads, so open and fair competition tends to sort out good cars from bad ones. With software, things aren't that simple. Is Visual Basic a great language? Hardly, yet for a long time it was the most widely used. Why?
    Maybe now that MDA is such a novelty, having this inner knowledge may not be optional, but as the technology evolves, such need will fade, and we'll probably be using models to create systems as naturally as we do with code today. I can't think of anything that could stop this evolution from eventually happening.

    History doesn't support this assertion. Ask most people who invented the VM and they would say Sun, when they released Java. Ask most people whether dynamic languages are preferable to static ones and they would say, "I don't know, I've only ever used a static language." Ask most people why LISP was an important milestone in language development and they would answer, "What's LISP?" When Java was released, no one was interested in "inner knowledge". The selling point was that it looked like C/C++, didn't leak memory, was cross-platform at runtime and was somehow tied to the "Web" buzz (remember all that sandbox stuff and applets?). How many people explored other options? Sun's marketing blitzed everything else.

    So how do we make technology selection choices? If the developers don't appreciate the issues and trade-offs involved, then what are the chances that their managers do? If neither the developers nor the managers know, then who in fact is making these choices?

    It has been mentioned in this thread by someone else, but I too have experienced projects where the tools and the method were defined by a big J2EE vendor. The vendor then sold consultants to "teach" the team how to use the tools and the method. The tool was graphical, and the main sales pitch was that it would "dumb down" development and thus reduce development costs (cheaper, less-skilled developers). The architect would do most of the clever stuff using the graphical tool.

    How do you think that project went? Do we really understand the issues and the choices facing us? Or are we just waiting for the vendors to turn up and save us from ourselves?

    Should universities be producing people "Java Certified" with little understanding of other languages and broader programming concepts? If you look at recent history, all the breakthroughs in Java development have come when developers, not vendors, have taken the lead. Also, many of these breakthroughs have been inspired by other languages and are not "Java Certified" by Sun.

    Living in a world where 99% of people are blind and must trust the remaining 1% doesn't sound like a recipe for success.

    Then again, that's the world we're living in today. So no wonder we are just waking up to the fact that we should be using technologies pioneered over 30 years ago (dynamic typing, block closures, continuations etc).

    Paul.
  73. Have you heard of Aardappel?

    Looks interesting, but any serious thought about it will have to wait until the weekend...
    The manifest destiny of software engineering is a quest for intuitive abstraction.

    Hmmm...not quite. The manifest destiny of software engineering is a quest for an efficient way to build, maintain, and leverage abstractions.

    Giving a software engineer superior abstractions will accelerate his development process. Enabling a software engineer to more effectively produce abstractions will allow him to accelerate his own development process.
    The nature of software engineering is that abstraction (eg, the JLS doesn't mention registers) increasingly trumps expressiveness.

    I disagree. The nature of software engineering is that abstraction increases expressiveness.

    C++ code can be significantly more abstract and cleaner than Java code, despite the fact that it provides so many "low level" facilities. The problem with C++ isn't that it forces you to think about low level details, it's that programmers seem drawn to low level details like a moth to a flame.

    I'm not advocating programming in C++; I think the language has problems, and its lack of a rich standard library is unacceptable. My point is, nobody is forcing C/C++ programmers to use the "register" keyword, or any of the other extremely low-level facilities in the language. But if for one reason or another you need that facility, you will be glad it's there.
    If a syntax is accessible to non-techies, it likely is more intuitive, more abstract, and imposes less cognitive load

    A good language should liberate the programmer. Yes, this means it shouldn't impose a large cognitive load, but it also means it shouldn't impose cognitive barriers. Java has a lot of barriers (no multiple inheritance, no operator overloading, weak metaprogramming, no templates, no scoped resource allocation) but doesn't impose too much of a cognitive load. C++ has few cognitive barriers (in the core language; its lack of good standard libraries is a barrier) but imposes a huge cognitive load.

    So I want a language that gives me the low-level capabilities of C++ without making me think about them, and the cognitive freedom and lightness of Python or Ruby. (BTW, I consider a lack of static type checking to increase the cognitive load, not decrease it, because static typing lets the compiler check my work for me.)

    What I don't want is a language (modelling or otherwise) that tells me "you don't really understand X, so you can't do X."
  74. ...abstraction increases expressiveness.
    That's patently wrong. How much formal computer science education did you get at university? Abstraction and expressiveness are polar opposites, by their definitions. There is always a tradeoff between the two, with software engineering's sweet spot always evolving away from expressiveness and toward abstraction.
  75. Programming Language Expressiveness[ Go to top ]

    ...abstraction increases expressiveness.
    That's patently wrong. How much formal computer science education did you get at university? Abstraction and expressiveness are polar opposites, by their definitions. There is always a tradeoff between the two, with software engineering's sweet spot always evolving away from expressiveness and toward abstraction.

    If you define a language's expressiveness by its Turing completeness, then both of us are wrong, because pretty much every modern language is Turing complete, from assembly to Java to Ruby. No points or demerits for changing abstraction levels.

    Personally, I like this definition:
    The language decision must recognize the difference between two issues: expressibility and expressiveness. Nearly every programming idea can be expressed in your favorite language. Expressiveness is about how well a language maps its solution space to the problem space. The ability to express an idea is called expressibility. The ease of expressing that idea is called expressiveness. For example, we might be able to compute Bessel functions in Lisp, but Fortran better expresses mathematical functions than Lisp. Lisp is more expressive of ideas related to artificial intelligence.

    http://www.stsc.hill.af.mil/crosstalk/2003/02/riehle.html

    So expressiveness is measured relative to the problem domain and is not an absolute value. The closer a language is to the problem domain, the more expressive it is for that problem domain.

    Object oriented programming allows us to define new abstractions within a language in order to bring it closer to the problem domain (so does functional programming, but to a lesser extent).

    Consequently, new abstractions increase the expressiveness of a language.

    This kind of boxes us into the same corner as the "Turing complete" definition: most languages provide a means of defining an infinite number of new abstractions, so doesn't that make most languages equally expressive?

    So let me refine the definition: Expressiveness is the combination of the availability of useful abstractions and the ease with which new abstractions can be defined in the language to bring it closer to the problem domain.

    If you have a better definition, please post it.
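
    To ground the definition, here's a tiny Java illustration (just a sketch; all the names are invented for this post): the same domain idea with and without a purpose-built abstraction. The Money version reads in the problem's vocabulary, which is what I mean by expressiveness relative to a domain.

    import java.math.BigDecimal;

    public class ExpressivenessDemo {

        // Without a domain abstraction, the solution space (bare doubles)
        // sits far from the problem space (invoicing in a currency):
        // units, rounding and currency are all implicit and easy to get wrong.
        static double addTaxRaw(double net, double taxRate) {
            return net + net * taxRate;
        }

        // A small abstraction moves the language closer to the problem domain.
        static final class Money {
            private final BigDecimal amount;
            private final String currency;

            Money(BigDecimal amount, String currency) {
                this.amount = amount;
                this.currency = currency;
            }

            Money plus(Money other) {
                if (!currency.equals(other.currency)) {
                    throw new IllegalArgumentException("currency mismatch");
                }
                return new Money(amount.add(other.amount), currency);
            }

            Money withTax(BigDecimal rate) {
                return new Money(amount.add(amount.multiply(rate)), currency);
            }

            public String toString() {
                return amount.toPlainString() + " " + currency;
            }
        }

        public static void main(String[] args) {
            Money net = new Money(new BigDecimal("100.00"), "EUR");
            // Reads in the vocabulary of the problem, not of the machine:
            System.out.println(net.plus(new Money(new BigDecimal("50.00"), "EUR"))
                                  .withTax(new BigDecimal("0.19")));
        }
    }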
    Not wanting to pull rank, but I actually earn my money as an Agile Coach. Agility is about change.
    Do you respect Scott Ambler?

    What are you guys? Lawyers? Aren't we supposed to use evidence and logic, not citations, to prove points? (Side note: my wife is a lawyer, and everything always seems to be about precedent, which boils down to: how many grumpy old men agree with my argument?)

    Brian, the fact of the matter is that many, many software developers out there have been burned in one way or another by vendors claiming to drop the programmer out of the equation by letting the analyst (or even the domain expert) create an enterprise system by dragging and dropping pictures.

    The biggest problem with MDA (and CASE before it) is that it gets sold as a silver bullet.
    The basic roles, even the precise meaning of the word 'architecture', were core MDA concepts I had to explain in response to various strawman arguments and prejudgements.

    So stop using those terms. "Architecture" and "Architect" are two of the most abused title inflators in software. Your definition of "architect" is certainly different from mine (yours is a subset of mine, really).
    In the face of this knowledge gap, particularly awkward and premature were your claims that MDA isn't "promising", and that MDA isn't agile, violates agile principles, and supposedly has been rejected by agile's authorities.

    So close the knowledge gap.

    The bottom line is that MDA isn't quite ready yet, and it has stiff competition from dynamic languages and from the gradual infusion of metaprogramming into Java. All of those face competition from the status quo.

    What I'd like to see is an example of a system (w/src) done using MDA that is full of all the quirks that we see everyday. Schemas that need to be specially tweaked for performance. Convoluted business logic. Pie-in-the-sky demands on the GUI. Integration to legacy systems. Algorithms that need heavy performance optimization. Countless others...

    So find a demo application like this, and I'm sure lots of people will take a look at it. None of the canned stuff that doesn't push the envelope of what the technology can do "out of the box," but something that requires the developer to extend the technology.
  77. So stop using those terms. "Architecture" and "Architect" are two of the most abused title inflators in software. Your definition of "architect" is certainly different than mine...
    That's a key difference between MDA and ad hoc traditional development. Traditionally the words 'architect' and 'architecture' were ill-defined and fuzzy. In MDA these terms have precise technical meanings. The quality of debate is limited if their precise MDA meaning isn't acknowledged. Worse yet is the likelihood that participants never knew that MDA has a formal vocabulary for concepts that traditionally were never formalized. Formality is a necessary ingredient for automation, and MDA aims to automate to a degree that was traditionally unimaginable.
    The bottom line is that MDA ... has stiff competition from dynamic languages and from the gradual infusion of metaprogramming into Java.
    That's a fundamental misunderstanding on your part. MDA should leverage whatever facilities exist in the target space. XDoclet was from the beginning a key piece of AndroMDA. At the top of the technology stack is MDA, even when the substratum includes a dynamic language and/or low-level generative mechanisms such as attributes and aspects. Eg, JavaScript is dynamic and usually generated rather than handcoded. Dynamic in no way precludes generative metaprogramming.
    What I'd like to see is an example of a system (w/src) done using MDA that is full of all the quirks that we see everyday. Schemas that need to be specially tweaked for performance. Convoluted business logic. Pie-in-the-sky demands on the GUI. Integration to legacy systems. Algorithms that need heavy performance optimization. Countless others...So find a demo application like this, and I'm sure lots of people will take a look at it. None of the canned stuff that doesn't push the envelope of what the technology can do "out of the box," but something that requires the developer to extend the technology.
    I've used model compilation for realtime firmware. Before MDA, firmware was the software industry's primary audience for model compilation. The J2EE space is relatively new and less demanding of runtime efficiency.
  78. In MDA these terms have precise technical meaning.

    Please post it or a link. I suspect I'll think it doesn't go far enough, but defined is better than undefined.
    The bottom line is that MDA ... has stiff competition from dynamic languages and from the gradual infusion of metaprogramming into Java.
    That's a fundamental misunderstanding on your part. MDA should leverage whatever facilities exist in the target space. XDoclet was from the beginning a key piece of AndroMDA. At the top of the technology stack is MDA, even when the substratum includes a dynamic language and/or low-level generative mechanisms such as attributes and aspects. Eg, JavaScript is dynamic and usually generated rather than handcoded. Dynamic in no way precludes generative metaprogramming.

    Hmmm... I don't think you fully understand metaprogramming, either. But you still might have a point.

    IMHO, one of the big challenges of MDA is the impedance mismatch between the modelling language (usually UML) and the programming language. This all goes back to Paul's arguments about mathematical transformations and the unfortunate need for round-trip engineering. In other words, the modelling language needs to be a strict subset of the programming language, to the point where the programming language is the native representation of the model.

    I bet you could use Python to represent UML models, or at least a decent subset of them. It wouldn't be perfect, but it would be a useful demonstration...
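
    As a toy sketch of that idea (in Java rather than Python, to match the rest of the thread; every class below is invented for illustration), a small subset of a UML class model can be represented directly as ordinary objects, making the programming language the native representation of the model:

    import java.util.ArrayList;
    import java.util.List;

    // A toy subset of a UML class model: classes, attributes, associations.
    class UmlClass {
        final String name;
        final List<UmlAttribute> attributes = new ArrayList<UmlAttribute>();
        UmlClass(String name) { this.name = name; }
    }

    class UmlAttribute {
        final String name;
        final String type;
        UmlAttribute(String name, String type) { this.name = name; this.type = type; }
    }

    class UmlAssociation {
        final UmlClass from;
        final UmlClass to;
        final String multiplicity;
        UmlAssociation(UmlClass from, UmlClass to, String multiplicity) {
            this.from = from; this.to = to; this.multiplicity = multiplicity;
        }
    }

    public class ModelDemo {
        public static void main(String[] args) {
            UmlClass customer = new UmlClass("Customer");
            UmlClass order = new UmlClass("Order");
            order.attributes.add(new UmlAttribute("total", "Money"));
            UmlAssociation places = new UmlAssociation(customer, order, "1..*");
            // The model is now ordinary data; a generator could walk these
            // objects and emit code, with no separate modelling syntax needed.
            System.out.println(places.from.name + " --places--> " + places.to.name
                    + " [" + places.multiplicity + "]");
        }
    }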
  79. In MDA these terms have precise technical meaning.
    Please post it or a link.
    In OMG's MDA Guide, section 2.2 "The Basic Concepts" defines some core terms. Section 2.2.4:
    "The architecture of a system is a specification of the parts and connectors of the system and the rules for the interactions of the parts using the connectors. [5]
    The Model-Driven Architecture prescribes certain kinds of models to be used, how those models may be prepared and the relationships of the different kinds of models."
    [5] Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall 1996.
    Dynamic in no way precludes generative metaprograming.
    Hmmm...... I don't think you fully understand metaprogramming, either.
    I wrote a JSP that used a JSP expression to dynamically generate a JavaScript statement at runtime. Fundamentally that's metaprogramming a dynamic language (JavaScript).
    This all goes back to Paul's arguments about mathematical transformations and the unfortunate need for round-trip engineering.
    An executable model doesn't need round tripping. I helped prove this on a real project over a decade ago. Ivar Jacobson (co-inventor of UML) came to our office and was shown a personal demo. Agile MDA forbids round tripping. When you say "unfortunate need for round-trip engineering", it reveals how little you know about succeeding with MDA.
  80. I wrote a JSP that used a JSP expression to dynamically generate a JavaScript statement at runtime. Fundamentally that's metaprogramming a dynamic language (JavaScript).

    That's code generation...and a rather trivial example at that. I suppose that's a subset of metaprogramming.
    An executable model doesn't need round tripping. I helped prove this on a real project over a decade ago. Ivar Jacobson (co-inventor of UML) came to our office and was shown a personal demo.

    The purpose of a model is to "hide the details" so people can focus on higher level concerns.

    What you're saying is either:
    1. MDA eliminates the need for you to worry about the details.
    2. With MDA, the details are captured in your model.

    I flat out don't believe #1 is possible for systems that do anything even remotely unique. I can see how MDA could eliminate the need to worry about 90% of the details, and that's great, but it's the remaining 10% that contains the innovation and really provides the value for the system.

    As for #2, I think pushing all the details into the model defeats the purpose of having a model, unless of course those details can be hidden and shown at will. Furthermore, I think expressing those details in UML is more difficult than coding them in a normal programming language.
    When you say "unfortunate need for round-trip engineering", it reveals how little you know about succeeding with MDA.

    I'll agree with you there. I'm quite ignorant about succeeding with MDA.

    There's an expression: 90% of the work is done for 90% of the duration of the project.

    MDA sounds like it changes it to: 90% of the work is done for 99% of the project.

    I see how MDA can cut development time, but I don't see how it can revolutionize development.
  81. I wrote a JSP that used a JSP expression to dynamically generate a JavaScript statement at runtime. Fundamentally that's metaprograming a dynamic language (JavaScript).
    That's code generation...and a rather trivial example at that. I suppose that's a subset of metaprogramming.
    You said I didn't understand metaprogramming. I then gave what you called "a rather trivial example", which I could only do if I understood the matter. But there's a bigger point that you haven't acknowledged. A dynamic language allows for very sophisticated/flexible metaprogramming within a static codebase; that's what you originally meant. But I can layer on top of this an even higher level of metaprogramming abstraction: code generation. And yes, "code generation" and "generative metaprogramming" are the same thing. MDA, an example of generative metaprogramming, can sit atop whatever metaprogramming you can dream of (JavaScript/Self prototypes, Ruby DSLs, etc) and still add additional value. As I crudely hinted at, you need only do something as simple as emit JavaScript from JSP, and that can amount to layering metaprogramming (generative JSP) on top of metaprogramming (JavaScript prototype-based coding).
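
    To spell out the shape of that layering, here's a rough sketch using a plain servlet instead of the JSP described above (the mechanism is the same; the class name and greeting logic are invented for illustration):

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Layer 1: this servlet is a metaprogram; its output is itself a program.
    public class GreetingScriptServlet extends HttpServlet {
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            response.setContentType("text/javascript");
            PrintWriter out = response.getWriter();

            // Server-side Java decides, per request, which JavaScript to emit.
            // (A real version would escape the parameter before echoing it.)
            String user = request.getParameter("user");
            if (user != null) {
                out.println("var greeting = 'Hello, " + user + "';");
            } else {
                out.println("var greeting = 'Hello, anonymous';");
            }
            // Layer 2: the emitted JavaScript is a dynamic program in its own
            // right and could go on to do prototype-based metaprogramming.
            out.println("document.title = greeting;");
        }
    }

    The Java layer writes the program; the emitted JavaScript is itself dynamic; MDA would sit another level above, generating the Java.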
    I think pushing all the details into the model defeats the purpose of having a model, unless of course those details can be hidden and shown at will.
    MDA isn't "pushing all the details into the model". That you would claim this shows you've never done MDA. Eg, with UML I can chose a Persistant stereotype. This completely hides the detailed mechanics of persistance. I may plug in a backend that persists to a relational database. I may later swap it out for a backend that persists to flat files or an XML database. So MDA has provided a valuable abstraction. As for "details can be hidden and shown at will", MDA has a standard facility called Query/Views/Transforms (QVT) that cover this, but it is optional -- MDA works fine without it.
    Furthermore, I think expressing those details in UML is more difficult than coding them in a normal programming language.
    UML 2's action language is easier to program in than handcoding languages. This is so since the action language is a DSL for manipulating instantiated model contents. The level of abstraction is higher, and this confers ease. Eg, I may be debugging a race condition in a rendezvous of two actors. MDA/UML lets me do this through behavioral models, which the action language is part of. With MDA I can single-step animate the sequence diagrams, state diagrams, activity diagrams, etc. This is incredibly agile. But Java abandons me in a wasteland of threads, stacks, synchronized blocks, etc, which is painful to debug in Eclipse. Java's cognitive load and other strain is immense. Ie, UML is easier to program than handcoding, which is the opposite of your claim.
  82. ...Do you respect Scott Ambler?

    No, I have no respect for Scott Ambler whatsoever.
  83. Agilility; MDA roles.[ Go to top ]

    The roles you describe just aren't agile. ... This is definitely NOT Agile.
    Google disagrees with you: it has 12,500 hits for the quoted phrase "agile mda", including pages on reputable sites such as omg.org, businessweek.com, ieee.org, agilealliance.com, agilemanifesto.org, sdmagazine.com, bptrends.com, ambysoft.com, agilemodeling.com, amazon.com, and barnesandnoble.com. There, that's a heap of cites indicating that MDA is agile. I notice that you smeared MDA as not agile, yet refuse to give a citation. Could it be that just maybe your awareness level isn't what you assume it is?
    When does the PIM designer get feedback on the strain he is placing on the generator and the PSM? When does the PSM generator designer get feedback on the type of optimisations needed to support a given PIM? How is the PIM designer empowered when his PSM generator fails?
    How is a Java developer empowered when his javac, the JVM, or his IDE fails? And does the failure mean that Java can't be agile? I mean, come on!

    You almost seem to misunderstand the meaning of development agility. It has nothing to do with a feedback cycle between the developer and his (model or Java) compiler. Agility is the ability to respond to changing requirements quickly. When TheServerSide held its MDA shootout, I went into detail about how MDA is more agile at this than traditional handcoding, and surprise, surprise, an employee of TheServerSide/TheMiddlewareCompany agreed my points deserved investigation. MDA is fundamentally more agile than traditional development, and I love to chat about it. The MDA role of analyst is perhaps the most agile developer role ever invented.
  84. MDA roles.[ Go to top ]

    Those two roles are both developers, but they're very dissimilar -- skills, focus, formalisms, social nature -- all different. The architect hand-codes. The analyst doesn't. The analyst cares about knowledge transfer (from non-technical subject matter experts). The architect doesn't. The analyst can do Business Process Reengineering. The architect can't.

    The architect you describe sounds like a junior programmer with an inflated title.
    MDA's splitting of development into two roles explicitly reaps the benefits of specialization (higher productivity, reduced labor bill, etc). A uniform crew of traditional code monkeys can't. A big benefit of this specialization is that for many development teams MDA can eliminate the role of resident architect. MDA presumes that architectures (templates and runtime libraries) are pluggable and can be bought shrinkwrapped and ready to use. Ie, a development team might make do without an architect. The team's developers can all be analysts focused on gathering domain knowledge and modeling it.

    Experience tells me that creating this rigid a division between the "solve the business concerns" and the "solve the technical concerns" roles is a recipe for disaster, assuming there's no person supervising the people in the two roles who thoroughly understands both.

    Additionally, sharp divisions of labor don't scale down well to small projects or organizations. The average person can't perform well when working on more than a few projects simultaneously. If all your people are specialists, and your business changes from having ten $100 million projects to one hundred $1 million projects, you're going to be screwed.

    I can clearly see how MDA could be a wonderful tool for increasing developer productivity, but selling it as a replacement for deep technical expertise is just selling snake oil.

    The minimum staff for any non-trivial software project is a person who thoroughly understands the problem domain, the technology, budgeting, and scheduling. From that point on you can add people with increasing specialization, but you can never remove the person with a holistic understanding without putting the project at serious risk.
  85. MDA roles.[ Go to top ]

    ...creating this rigid of division between the "solve the business concerns" and the "solve the technical concerns" roles is a recipe for disaster...
    The universal existence of a corporate promotional ladder and chain of command suggests that your fear is ungrounded. Specialization is the norm, not a risky experiment.
    Additionally, sharp divisions of labor don't scale down well to small projects or organizations.
    What you say is generally true in the workplace, but not so with MDA. MDA not only splits development into two roles; with small engagements it also eliminates the resident architect's role and relies on prepackaged infrastructure. This means that MDA could boost the productivity of small projects and organizations.
    ...you can never remove the person with a holistic understanding without putting a project at serious risk.
    That's bullshit. The history of software engineering has been mostly about increasing abstraction and automation. A consequence of this has been the mainstream extinction of many glorious developer roles. First the assembly hacker mostly vanished. Then the C/C++ grunts were relegated to firmware and other low-level shit. The next to be dislodged from prominence might be the Java code monkey. I wasn't coding Java ten years ago, and I don't expect to be a decade from now.
  86. MDA roles.[ Go to top ]

    The next to be dislodged from prominence might be the Java code monkey. I wasn't coding Java ten years ago, and I don't expect to be a decade from now.

    First: I take exception to your categorization of programmers as "code monkeys". Good Software Developers have a very holistic view of the entire process, have solid technical skills, and know how to work with others to define and implement requirements.

    Code monkeys are those who bang on the keyboard without great understanding of what they are doing.

    I happen to know of projects where management, in their infinite wisdom, decided to spend $10000K/person on a whiz-bang code generator/CASE/rule-based system for spewing out "Java apps" in record time. So they got 10-20 "analysts" who knew very little about programming to use this tool (a well-known and well-respected tool) to whip up the app. The end result?

    They weren't able to ship the product after 2 years of development! In that amount of time, and for that amount of money, I could have hired C (maybe even assembly) programmers and whipped up that system much faster, and it would actually work!

    So who's the monkey? The one who understands what's going on? Or the one who relies on tools to do the hard work?

    (Would you want a script kiddie developing your enterprise apps?)
    ...creating this rigid of division between the "solve the business concerns" and the "solve the technical concerns" roles is a recipe for disaster...
    The universal existence of a corporate promotional ladder and chain of command suggests that your fear is ungrounded. Specialization is the norm, not a risky experiment.

    Ok... tell that to the DoD. That is the basic premise of the waterfall model: the people at the bottom of the waterfall are the "code monkeys" you are so fond of.

    If MDA is going to work it has to be more than just a rehashing of the CASE thesis.
  87. MDA roles.[ Go to top ]

    The next to be dislodged from prominence might be the Java code monkey. I wasn't coding Java ten years ago, and I don't expect to be a decade from now.
    First: I take exception to your categorization of programmers as "code monkeys". Good Software Developers have a very holistic view of the entire process, have solid technical skills, and know how to work with others to define and implement requirements.

    Code monkeys are those who bang on the keyboard without great understanding of what they are doing.

    I happen to know of projects where management, in their infinite wisdom, decided to spend $10000K/person on a whiz-bang code generator/CASE/rule-based system for spewing out "Java apps" in record time. So they got 10-20 "analysts" who knew very little about programming to use this tool (a well-known and well-respected tool) to whip up the app. The end result?

    They weren't able to ship the product after 2 years of development! In that amount of time, and for that amount of money, I could have hired C (maybe even assembly) programmers and whipped up that system much faster, and it would actually work!

    So who's the monkey? The one who understands what's going on? Or the one who relies on tools to do the hard work?

    (Would you want a script kiddie developing your enterprise apps?)
    ...creating this rigid of division between the "solve the business concerns" and the "solve the technical concerns" roles is a recipe for disaster...
    The universal existance of a corporate promotional ladder and chain of command suggests that your fear is ungrounded. Specialization is the norm, not a risky experiment.
    Ok... tell that to the DoD. That is the basic premise of the waterfall model: the people at the bottom of the waterfall are the "code monkeys" you are so fond of.

    If MDA is going to work it has to be more than just a rehashing of the CASE thesis.

    +1

    There are those who roll up their sleeves and get on and do stuff. And there are those who succumb to the charms of the "snake oil" seller. No wonder most IT departments have very little credibility left. MDA will come and go just like CASE. The downside is that business will be even more suspicious and untrusting of IT than it already is today.
  88. MDA roles.[ Go to top ]

    ...creating this rigid of division between the "solve the business concerns" and the "solve the technical concerns" roles is a recipe for disaster...
    The universal existance of a corporate promotional ladder and chain of command suggests that your fear is ungrounded. Specialization is the norm, not a risky experiment.

    I hope you are simply grossly misinterpreting my comments. I didn't say specialization is bad. I said you always need someone who understands the big picture to be involved in the project. The need for this person is universal. The need for various specialities is project-dependent.
    ...you can never remove the person with a holistic understanding without putting a project at serious risk.
    That's bullshit. The history of software engineering has been mostly about increasing abstraction and automation. A consequence of this has been the mainstream extinction of many glorious developer roles.

    The history of software engineering is littered with miserable failures. Increasing abstraction is essential. But you aren't talking about using MDA to increase abstraction; you're talking about using MDA to increase ignorance. Increasing abstraction *only* works if the abstraction is 100% compatible with your problem domain. The only way to achieve that is for people on the project to have complete control over the abstraction, and even then it requires brilliant people.

    There are lots of developers and analysts out there who are perfectly competent to solve complicated problems for a specific case. Framing and solving problems in the general case is much, much more difficult. Using a highly (or purely) generative approach to software development forces the development team to either (1) do a lot of work that the customer doesn't want to pay for in order to adequately generalize the problem, or (2) build tons of one-off generators, which is much more complicated than just intelligently writing the code.

    Let me approach it from another angle. In order for MDA to work, the problem being solved needs to fit in the box defined by the MDA package. If the problem fits so neatly into a box, it probably isn't worth my time to solve, because it's not going to offer any competitive advantage to my company.

    Or are you one of those people who think IT can never revolutionize business again?
    Then the C/++ grunts were relegated to firmware and other low level shit.

    Next time you boot up your computer, I want you to think about all the poor "shit" that had to be written so that you can work at your precious high level of abstraction. We don't all need to hack assembly and C/C++, but if none of the world's best and brightest want to work on it, technological innovation will come to a screeching halt.
  89. MDA roles.[ Go to top ]

    I said you always need someone who understands the big picture to be involved in the project. The need for this person is universal.
    That's just patently wrong. The JRE eliminated the need for C/C++ expertise in most enterprise teams, even though much or most of the JRE is coded in C/C++. The JRE is an example of mainstream expertise elimination. Is there an emotional reason why you argue against progress, especially automation and role elimination?
    But you aren't talking about using MDA to increase abstraction...
    Oh yes I am. MDA/UML-2 is very abstract, especially PIMs. An advantage of MDA is that it introduces novel abstractions that are usually unavailable to the handcoding monkeys. I gave a compelling example regarding concurrency earlier in this thread.
    Next time you boot up your computer, I want you to think about all the poor "shit" that had to be written so that you can work at your precious high level of abstraction.
    You're being a Luddite. Hiding complexity is one of the goals of technological progress. Software is most successful when it runs and we needn't mind it.

    Consider that javac is written in Java and builds itself. An MDA architecture might do the same for itself.
  90. MDA roles.[ Go to top ]

    That's just patently wrong. The JRE eliminated the need for C/C++ expertise in most enterprise teams, even though much or most of the JRE is coded in C/C++. The JRE is an example of mainstream expertise elimination.

    Ok, either you are intentionally misinterpreting me or I'm being really unclear. When I say "big picture," I mean someone who understands:
    1. The problem domain, to the point where the customer would trust him to re-engineer their process(es)
    2. The technical concerns: programming, databases, networks, operating systems, etc
    3. The business concerns, such as "how will a change in project cost and/or schedule impact my customer's business?" so that he can competently trade business drivers against technical drivers.

    This person doesn't need to be a guru in everything. As I said before, he can lead specialists.
    Is there an emotional reason why you argue against progress, especially automation and role elimination?

    None against role elimination. Although I've met enough salespeople and consultants who have tried to sell non-technical managers products that "eliminate" the need for programmers to make my hackles rise.

    The fundamental problem with every technology I've seen that claims to eliminate the need for expertise is that it makes one set of tasks "easy," and anything outside those tasks difficult or nearly impossible.
    An advantage of MDA is that it introduces novel abstractions that are usually unavailable to the handcoding monkeys.

    Why do novel abstractions have to be part of a UML model? Why can't the programmer introduce his own abstractions that are better tailored to the problem he's trying to solve?

    You're attempting to eliminate one barrier by building a bigger one.

    I say eliminate complexity at its core. I say enable the programmer to effectively cut complexity from his own code, not just use prefabbed components to sweep it under the rug.
    You're being luddite. Hiding complexity is one of the goals of technological progress. Software is most successful when it runs and we needn't mind it.

    No, you're acting like a child. You called the creation of the technological foundations that we all depend on sh*t work. While you extol the value of hiding complexity, you bash the very people who hide it from you.
  91. MDA roles.[ Go to top ]

    A big benefit of this specialization is that for many development teams MDA can eliminate the role of resident architect. MDA presumes that architectures (templates and runtime libraries) are pluggable and can be bought shrinkwrapped and ready to use. Ie, a development team might make do without an architect. The team's developers can all be analysts focused on gathering domain knowledge and modeling it.

    Hi Brian,

    In my haste to respond to your post I think I missed the point you were making. Not only does the Business Analyst not need technical skills, but on some MDA projects all technical expertise can be "bought in" in the form of "templates and run-time libraries" (I think you mean a code generator), freeing the development team to do high-level analysis and modelling.

    Do you mean this? Is this what Mr Mellor is touting nowadays? Well, if true, then MDA's time in the sun will be even shorter than CASE's.

    BTW What will your non-technical MDA development team do when their code generator throws an exception and they are left staring at a stack trace?

    I don't suppose the business will be able to do much with the Model, without working code. Then again they could use it as scrap paper :^)
  92. MDA roles.[ Go to top ]

    Not only does the Business Analyst not need technical skills...
    Whoa, no, I never said nor thought that. When I was a business analyst preparing my static and dynamic diagrams for compilation, I always felt like a technical developer. UML 2 is a very technical specification language. UML 2's processing semantics (eg, activity diagrams or action semantics) allow data flow and control flow, including parallelism and asynchrony.

    Actually I felt I was a more productive developer using model translation than without it. Eg, C++ code monkeys worry about memory corruption. I never worried, even though the templates emitted C++. I trusted the templates to eliminate many categories of C++ accidents: the stack corruptions, the heap corruptions, array indexes out of bounds, and the reading of uninitialized memory. Admittedly this was a bigger deal with unsafe C++ than it would be with mostly safe Java today.

    But array index out of bounds is only detectable at runtime in Java, whereas a hardened MDA architecture (templates and runtime library) is guaranteed to be free of this exception. Java can't statically guarantee it. Model translation can. So Java applications built with MDA get it guaranteed. I was also guaranteed fewer null pointer crashes, since the architecture was hardened. There are so many ways to guarantee quality with a reusable architecture.
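
    One way templates can rule out that defect class, sketched in plain Java (the class and methods are invented for illustration; real generated code would be richer):

    import java.util.Arrays;
    import java.util.List;

    public class SafeIterationDemo {

        // Hand-coded style: the off-by-one is the programmer's to make.
        static int sumByIndex(int[] values) {
            int sum = 0;
            for (int i = 0; i <= values.length; i++) { // bug: <= overruns the array
                sum += values[i];
            }
            return sum;
        }

        // Generated style: if templates only ever emit index-free iteration,
        // no index arithmetic exists that could go out of bounds.
        static int sumGenerated(List<Integer> values) {
            int sum = 0;
            for (Integer v : values) {
                sum += v;
            }
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(sumGenerated(Arrays.asList(1, 2, 3))); // prints 6
            // sumByIndex(new int[] {1, 2, 3}) would throw
            // ArrayIndexOutOfBoundsException at i == 3.
        }
    }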
    ... "templates and run-time libraries" (I think you mean a code generator).
    The code generator is only half of an MDA architecture. The other half is the reusable runtime platform, including the operating system, the bytecode virtual machine, and the generic part of the deliverable codebase (ie, the architecture's own proprietary libraries and supporting third-party libraries).
    Freeing the development team to do high level analysis and modelling. Do you mean this?
    Yes. OMG's MDA Guide said it best:
    "There are contexts in which a PIM can provide all the information needed for
    implementation, and there is no need to add marks or use data from additional profiles, in order to be able to generate code. One such is that of mature component-based development, where middleware provides a full set of services, and where the necessary architectural decisions are made once for a number of projects, all building similar systems (for example, there is a component based product line architecture in place). These decisions are implemented in tools, development processes, templates, program libraries, and code generators."
    Of special importance in the above is the phrase "the necessary architectural decisions are made once for a number of projects". This means the role of the resident architect is potentially eliminated.
    Is this what Mr Mellor is touting nowadays?
    I don't think I've read any of his post-Shlaer/Mellor MDA stuff, so I don't know which benefits of MDA he personally spotlights. I prefer MDA's quality guarantees and role elimination.
    What will your non-technical MDA development team do when their code generator throws an exception and they are left staring at a stack trace?
    That's no worse than complaints of a Java IDE not being able to build some projects. I've seen stuff like that on eclipse.org's bugzilla. Does that mean OO IDEs all suck?
  93. MDA roles.[ Go to top ]

    What will your non-technical MDA development team do when their code generator throws an exception and they are left staring at a stack trace?
    That's no worse than complaints of a Java IDE not being able to build some projects. I've seen stuff like that on eclipse.org's bugzilla. Does that mean OO IDEs all suck?

    No, it means that their chosen IDE sucks. But being technical people they can always fall back on ant, or a batch file, or even the command line to get the job done. The only option with MDA will be a support call to the vendor, and waiting...

    Brian, just how much hands-on development experience do you have? Don't quote OMG specs written by committees with nothing better to do. Rely on hard-earned experience. All the replies to your latest posts say exactly the same thing: it won't work, don't do it.

    BTW, with CASE at the beginning the vendors said exactly the same thing: don't modify the generated code, just make changes to the model. After being overwhelmed with support calls, they then decided to add a "feature" whereby changes to the (generated) code could be "de-compiled" back into the model, hence the birth of "round-trip engineering". A marketing term of course; the rest of us just called it a bl**dy mess.

    It's painful seeing history repeat itself.
  94. MDA roles.[ Go to top ]

    ...hence the birth of "round-trip engineering". A marketing term of course; the rest of us just called it a bl**dy mess. It's painful seeing history repeat itself.
    You wrongly assume that round-tripping is a preferred way to apply MDA. Stan Sewall said it best:
    "Roundtrip is not a requirement and should not be used for MDA projects. A seasoned MDA Practioner will never endorse a roundtripping methodology."
    Maybe you aren't informed enough to condemn MDA.
  95. MDA roles.[ Go to top ]

    ...hence the birth of "round-trip engineering". A marketing term of course; the rest of us just called it a bl**dy mess. It's painful seeing history repeat itself.
    You wrongly assume that round-tripping is a preferred way to apply MDA.

    Again you've missed it. I said the original intention was not to do round-tripping, but in the end this was the only way users could get anything done (and so the vendors could still justify selling you the generator in the first place). It is precisely because the generator team were unable to resolve all the problems with the generator that they were forced to provide this facility as a back door.
  96. <quote>
    Again you've missed it. I said the original intention was not to do round-tripping, but in the end this was the only way users could get anything done (and so the vendors could still justify selling you the generator in the first place). It is precisely because the generator team were unable to resolve all the problems with the generator that they were forced to provide this facility as a back door.
    </quote>

    Like I already said above: will you decompile Java CLASS files to fulfil your needs? I hope not :-) I surely won't.

    With MDA you'll be able to change the transformation definitions (the compiler) yourself. You can also build your own compiler (model compiler == transformation definitions in MDA terms) more easily than before (it's not a trivial task to build javac, is it?). A toy sketch of such a transformation follows below.

    So if you see this from a bird's-eye perspective you will see that MDA with its model transformations is just the same as the compilers we know today:
    - javac (compile Java files to Class files)
    - xmlc (compile XML and its derivatives to Java files)
    - aspectj (compile AspectJ language to Java files)
    - mtl (compile BasicMTL language to Java files)
    - AndroMDA cartridges (compile UML models to Java files for presentation, specification and business layers implementations with Java, Struts, JSF, Enhydra, Hibernate, EJB, Spring, Webservice and what so ever)
    - and many other compilers...

    The more mature the compilers become, the more certain it is that you will *never ever* go for round-tripping... ;-)
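
    To make the "model compiler" idea concrete, here is a toy transformation definition in plain Java. Real cartridges (eg, AndroMDA's) are template-driven and far richer; everything below is invented just to show the shape of the idea:

    public class TinyModelCompiler {

        // The "model" here is just a class name plus attribute names.
        // The method body is the "transformation definition": every model
        // attribute becomes a private String field plus a getter. Editing
        // this rule is "changing the compiler" in the sense used above.
        static String transform(String className, String[] attributes) {
            StringBuilder src = new StringBuilder();
            src.append("public class ").append(className).append(" {\n");
            for (String attr : attributes) {
                src.append("    private String ").append(attr).append(";\n");
                src.append("    public String get")
                   .append(Character.toUpperCase(attr.charAt(0)))
                   .append(attr.substring(1))
                   .append("() { return ").append(attr).append("; }\n");
            }
            src.append("}\n");
            return src.toString();
        }

        public static void main(String[] args) {
            // Emits a complete (if boring) Java class from the "model".
            System.out.println(transform("Customer", new String[] {"name", "email"}));
        }
    }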

    Cheers,
    Lofi.
  97. Will you decompile Java CLASS files to fulfil your needs?
    Indeed, Paul's bogus assertion is that MDA must be accompanied by hacking the generated code and round-tripping. But as you note, this is no less absurd, and no different technically, than a Java developer editing a binary classfile to fix a bug. Whether MDA or Java, the developer should always fix his bug directly in the source -- the original manual artifacts. Round-tripping is a fundamentally flawed praxis, regardless of technology. It's FUD to associate round-tripping with MDA.
  98. Will you decompile Java CLASS files to fulfil your needs?
    Indeed, Paul's bogus assertion is that MDA must be accompanied by hacking the generated code and round-tripping. But as you note, this is no less absurd, and no different technically, than a Java developer editing a binary classfile to fix a bug. Whether MDA or Java, the developer should always fix his bug directly in the source -- the original manual artifacts. Round-tripping is a fundamentally flawed praxis, regardless of technology. It's FUD to associate round-tripping with MDA.

    Hi Brian, you've gone off on a trip about round-tripping, wrongly quoting me several times. How about quoting me on alternative development views other than top-down (such as iterative, evolutionary design discovery, bottom-up), or on the difference between deterministic, mathematically proven transforms (compilers) versus non-deterministic, creative and intelligent transforms (human-powered analysis, design, and programming)?

    You seem to be very selective. Why?
  99. <quote>
    What we want to do is write applications and solve business problems. But look through your posts on MDA: they're full of technology (TLAs). That should tell you something.
    </quote>

    Like Brian said, this is not true. I just told you how the MDA approach can be seen from the *inside*.

    <quote>
    The purpose of a programming language is to narrow the gap between humans and computers. IMO MDA attempts to do this by sweeping technical complexity under the carpet.
    </quote>

    Not sweeping the technical complexity under the carpet but SEPARATING it from the business complexity (PIM and PSM) => just an old computer science principle: separation of concerns.

    <quote>
    Again, how we would like the world to be and how it actually is are often different things. That's why I say "the simplest thing possible, but not simpler". If these two concerns can be decoupled in the way you describe then great, but is that how things really are? Look to nature: many complex natural structures have both a macro and a micro organisation. Often the two are related in complex, yet simple and elegant ways.
    </quote>

    Yes, this is true, and therefore I see MDA with its transformation definitions as the way to handle this. You decouple the *concerns* and use *transformations* to couple them *automatically* again. The defect in such structures today is that you have to do the transformations *manually* (imagine the business-domain-to-technical-platform transformations), so you may lose some important information. Surely I'm not saying that writing the transformation definitions will be easy... :-)

    Anyway, this is a good article about model-driven development from Microsoft. Yes, MS is also going in this direction, not directly with UML though (they use their own DSLs with their own metamodels), but applying all the techniques of model-driven development; they call it Software Factories (you'll also see here the definition of economies of scale and scope from the discussion above):
    http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnmaj/html/aj3softfac.asp

    A comparison of MDA and Software Factories:
    http://www.iasarchitects.org/iasa/others/austinAdmin/IASANewsletter002-102.pdf

    Cheers,
    Lofi.
  100. Again, how we would like the world to be and how it actually is are often different things. That's why I say "the simplest thing possible, but not simpler". If these two concerns can be decoupled in the way you describe then great, but is that how things really are? Look to nature: many complex natural structures have both a macro and a micro organisation. Often the two are related in complex, yet simple and elegant ways.
    Yes, this is true, and therefore I see MDA with its transformation definitions as the way to handle this. You decouple the *concerns* and use *transformations* to couple them *automatically* again. The defect in such structures today is that you have to do the transformations *manually* (imagine the business-domain-to-technical-platform transformations), so you may lose some important information. Surely I'm not saying that writing the transformation definitions will be easy... :-)

    You haven't addressed my point about "alternative views". The number of times whilst doing TDD I've experienced my code telling me that my analysis is wrong or my design specification (tests) is wrong or incomplete. I would not have "discovered" this top-down. The information is coming bottom-up.

    What I am saying is that your PIM informs your PSM, but your PSM also informs your PIM. They are intimately linked. Years of software development experience have shown this.

    Thanks for the links.
  101. MDA and zero defects.[ Go to top ]

    The number of times whilst doing TDD I've experienced my code telling me that my analysis is wrong or my design specification (tests) are wrong or incomplete.
    When your hand code malfunctions, it actually tells you very little. It certainly doesn't tell you whether the malfunction comes from an analysis defect, a design defect, or a coding defect. Whereas with MDA a commodity architecture is battle tested, and the analyst can usually assume the malfunction is an analysis defect. Certainly with MDA and a hardened architecture, broad categories of defects are eliminated completely. Eg, model compilation should never result in a codebase that won't build.
  102. MDA and zero defects.[ Go to top ]

    The number of times whilst doing TDD I've experienced my code telling me that my analysis is wrong or my design specification (tests) are wrong or incomplete.
    When your hand code malfunctions, it actually tells you very little. It certainly doesn't tell you whether the malfunction comes from an analysis defect, a design defect, or a coding defect. Whereas with MDA a commodity architecture is battle tested, and the analyst can usually assume the malfunction is an analysis defect. Certainly with MDA and a hardened architecture, broad categories of defects are eliminated completely. Eg, model compilation should never result in a codebase that won't build.

    We are talking at cross purposes. My code speaks to me. Code smells jump out and tell me that something is wrong. Often I can trace the problem back to analysis or high-level design:

    http://xp.c2.com/CodeSmell.html

    I fix the problem there and then. Fix my tests (design specification) if needed and perhaps think of new questions to ask my customer. The new information is fed directly into my model bottom-up.

    Saying that you have zero defects with MDA is ludicrous. You will be injecting defects all the time. You will not detect those defects until you execute and test your model (your generator isn't going to remove them for you).

    Once your PIM is working (executing correctly against your stated design specification), you can still have "defects" in the sense that your model is much more complex (smelly) than it needs to be. This will be OK until you find a requirement that your generator cannot deal with. Then you will need to deal with all that unneeded complexity yourself (either by hand-coding, or by improving your generator).

    Alternatively, your design could be flawed through a misunderstanding. You may "get it working" by adding complexity, but the opportunity to discover a "cleaner" design through a "better" understanding of the problem may be lost. The design smell may be clearly identifiable from the bottom up, but of course you have no interest in the generated code.

    I have not done MDA, but I have used CASE tools and code generators in the past, and they always led to the same thing: hand-modifying nasty, complex generated code. Hardly a recipe for zero defects.
  103. MDA and zero defects.[ Go to top ]

    Saying that you have Zero Defects with MDA is ludicrous.
    Ignoring this post's title, I never actually claimed MDA projects are defect free. I said:
    ...a reusable MDA architecture will approach zero defects faster than the development of traditional libraries and frameworks would. ...broad categories of defects are eliminated completely. ...the analyst can usually assume the malfunction is an analysis defect.
    Think of the implications to quality and time to market. Model translation can make a difference.
  104. <quote>
    You haven't addressed my point about "alternative views". The number of times whilst doing TDD I've experienced my code telling me that my analysis is wrong or my design specification (tests) is wrong or incomplete. I would not have "discovered" this top-down. The information is coming bottom-up.

    What I am saying is that your PIM informs your PSM, but your PSM also informs your PIM. They are intimately linked. Years of software development experience have shown this.
    </quote>

    you are missing the point. Please distinguish between DISCOVERING the mistake, which can happen bottom-up, inside-out, whatever, and HOW and WHERE you will CORRECT the mistake. It's correct that during coding (in Java for example) you will see that the analysis might be wrong. The point is how and where you will correct this mistake. The answer: ALWAYS do it TOP-DOWN; you will change your analysis model (because that is where your defect is!).

    Analogous to this: if you find some mistakes in your resulting application, will you decompile your Java CLASS files and change the code inside these CLASS files? Nope. You change/rewrite your Java code and recompile. It is just the same with MDA.
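
    A small illustration of what I mean. The file header below is invented, but it is typical of what a generator emits, and it makes the point: the generated artifact is a build product, not a place to make fixes:

    // =================================================================
    // GENERATED from the model CustomerOrder.uml -- DO NOT EDIT.
    // Like a .class file produced by javac, this file is overwritten
    // on every generation run. To fix a defect: change the model,
    // then regenerate.
    // =================================================================
    public class CustomerOrder {
        private String orderId;

        public String getOrderId() {
            return orderId;
        }
    }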

    Cheers,
    Lofi.
  105. <quote>
    You change/rewrite your Java code and recompile. It is just the same with MDA.
    </quote>

    Hi Lofi,

    At least you're addressing the issues (unlike Brian). The bit that both Brian and yourself are missing is that code generation is not like program compilation. A compiler makes deterministic transforms in a well understood and defined way. Analysis and design are creative activities; they are not deterministic. Now if your PIM contains less information than your program, then it will be down to your generator to 'create' it. Short of artificial intelligence, all your generator can do is map patterns and make inferences, but it cannot be creative. So when you want to do something that your generator builders didn't anticipate and cannot infer, then you need to drop back to a lower level of abstraction. If you can do this in your PIM using, say, UML 2.0, then you aren't modelling, you are programming, and there are far better programming languages out there than UML 2.0.

    On the point of where you should make corrections: I make the correction where I detect the smell. That fix then leads me to the next, and the next, and bit by bit the macro structure of my program changes, "discovered" one step at a time by making small micro changes. This IS NOT top-down.

    Top-down development is programming by intent. I know what the design should be, hence I put the top-level structure in place in anticipation. This is the traditional waterfall approach. The problem with this approach is that your intent could be flawed. That is why we no longer treat programmers as "code monkeys": a programmer can detect a flaw in the thinking of the analyst. In fact it is easier for the programmer to detect flaws because, unlike the analysis model, the program model must work.

    Programming is complex, with many layers of abstraction. The lowest human layer (before compilers, VMs and operating systems take over) is the layer at which computers can make mathematically proven deterministic transforms. I will call this level the programmer level. The programmer needs to be comfortable at all levels of abstraction above this, moving up and down the ladder of abstraction as needed. The ability to do this requires intelligence, creativity and skill. The ability to do this in a "hardened" reliable way cannot be automated.
  106. <quote>
    A compiler makes deterministic transforms in a well understood and defined way. Analysis and design are creative activities; they are not deterministic. Now if your PIM contains less information than your program, then it will be down to your generator to 'create' it. Short of artificial intelligence, all your generator can do is map patterns and make inferences, but it cannot be creative.
    So when you want to do something that your generator builders didn't anticipate and cannot infer, then you need to drop back to a lower level of abstraction.

    If you can do this in your PIM using, say, UML 2.0, then you aren't modelling, you are programming, and there are far better programming languages out there than UML 2.0.
    </quote>

    ... and this is the difference between T-MDA and E-MDA ;-) T-MDA (Translationist) uses Action Semantics, making you do "programming" with UML (the second part of your comment).

    Whereas E-MDA (Elaborationist) is the first part of your comment: "So when you want to do something that your generator builders didn't anticipate and cannot infer, then you need to drop back to a lower level of abstraction."

    -> Yep, using a programming language like Java, C++, assembler, etc. E-MDA is at the moment the *pragmatic way* of working with MDA, because you base your work on the lower level, which is indeed already mature...

    <quote>
    Programming is complex, with many layers of abstraction. The lowest human layer (before compilers, VMs and operating systems take over) is the layer at which computers can make mathematically proven deterministic transforms. I will call this level the programmer level. The programmer needs to be comfortable at all levels of abstraction above this, moving up and down the ladder of abstraction as needed. The ability to do this requires intelligence, creativity and skill. The ability to do this in a "hardened" reliable way cannot be automated.
    </quote>

    Yes, agreed, but as I said above, we always search for a better solution and try to raise this abstraction level... and MDA is one solution for this.

    Cheers,
    Lofi.
  107. <quote>
    Yes, agreed, but as I said above, we always search for a better solution and try to raise this abstraction level... and MDA is one solution for this.
    </quote>

    OK, at last. I also agree with the goal of higher abstraction. We need to be honest: MDA is an experiment. There is nothing wrong with experimentation and research; I think it is a good thing. Experiments in this particular area have been oversold in the past and failed miserably. I think it is very important that MDA doesn't make the same mistake. It will be interesting to see how MDA develops in the future.
  108. <quote>
    There is nothing wrong with experimentation and research; I think it is a good thing. Experiments in this particular area have been oversold in the past and failed miserably. I think it is very important that MDA doesn't make the same mistake. It will be interesting to see how MDA develops in the future.
    </quote>

    Reflecting on my statement above, I want to be clear: I do not think that MDA is a promising area of research (as you can tell from my previous posts), but it is an interesting one.

    The reason why exaggerated claims for tools/methods like MDA infuriate me so is that there are a lot of unscrupulous vendors out there.

    I'm a freelance contractor, and I really do have to pick and choose my projects. Most projects make poor tool and method decisions prior to hiring developers who could provide them with independent advice. They do this on the say-so of vendors whose sole interest is to milk them for as much money as they can get.
  109. Hi Lofi,

    After spending some time saying why I think MDA won't work, let me spend a little time describing what I think will.


    Here is a link to a video presentation of a 3D collaborative environment known as Croquet.


    Croquet is built using a few simple principles:

    1) A late bound architecture
    2) A pure object metaphor with deep meta-programming capability
    3) A VM which is bit-identical across multiple platforms, defined through a working executable specification (Squeak) rather than a paper spec (Java).

    Listen to what Alan Kay has to say. One thing he says that is very profound is that it is the humble people who came up with the most workable ideas (some of which are over 30 years old). The latest whiz-bang complex ideas may partially work, but often lack integrity (they do not work under all circumstances). A good example of a humble idea is the world wide web: simplicity and integrity at its core. It has outlived a number of more complex whiz-bang ideas like DCE, CORBA etc.

    Late-bound architectures actually reduce the compile-time transformations performed by the computer (the opposite of MDA), replacing static translation with run-time available meta-data. In so doing they provide the programmer with more power, allowing him to use his creativity to create abstractions (objects) whose behaviour and interactions can be manipulated at run-time in new, exciting and interesting ways (e.g. through scripting and/or direct end-user manipulation).
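
    Even in Java you can get a small taste of this through reflection, where the class to instantiate and the message to send are chosen at run-time rather than at compile time. A contrived sketch (the default class and method names are arbitrary):

    import java.lang.reflect.Method;

    public class LateBindingDemo {
        public static void main(String[] args) throws Exception {
            // Which object to create and which message to send are
            // decided at run-time, e.g. from a script or user input,
            // not frozen in at compile time.
            String className = args.length > 0 ? args[0] : "java.util.Date";
            String selector  = args.length > 1 ? args[1] : "toString";

            Object receiver = Class.forName(className).newInstance();
            Method message  = receiver.getClass().getMethod(selector, new Class[0]);
            System.out.println(message.invoke(receiver, new Object[0]));
        }
    }

    Smalltalk takes this much further, of course: there reflection isn't a bolt-on API, it's how the whole system works.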

    I don't think it is any surprise that stuff like Seaside and Croquet is coming out of the Smalltalk community. Don't marginalise the programmer (even code monkeys are more intelligent than generators); empower him/her :^).
  110. Hi Paul,

    <quote>
    Croquet is built using a few simple principles:
    </quote>

    thanks for the link! I've been aware of Croquet since reading an article about it in August this year ;-)
    Yes, I agree, the idea is very refreshing, especially after working with Java and its APIs...

    Anyway, I'm very pragmatic: use the languages and tools which make you powerful, productive and fast ;-)

    Cheers,
    Lofi.
  111. MDA = Scrum + Kanban[ Go to top ]

    Brian:
    Did you study any manufacturing in school, like mass production and economy of scale? ... I was taught kanban, a 25 year-old Japanese way.
    Paul:
    I would suggest that you take a look at methodologies like SCRUM or XP as they clearly describe how ludicrous this [MDA] mindset is.
    Are there any PhDs here? Here's a paper by the PhD who invented Scrum:
    The Roots Of Scrum: How Japanese lean manufacturing changed global software development practices.
    Is your architecture driven?
  112. MDA = Scrum + Kanban[ Go to top ]

    Brian:
    Did you study any manufacturing in school, like mass production and economy of scale? ... I was taught kanban, a 25 year-old Japanese way.
    Paul:
    I would suggest that you take a look at methodologies like SCRUM or XP as they clearly describe how ludicrous this [MDA] mindset is.
    Are there any PhDs here? Here's a paper by the PhD who invented Scrum:
    The Roots Of Scrum: How Japanese lean manufacturing changed global software development practices.
    Is your architecture driven?
    Hi Brian,

    I respect your enthusiasm for MDA, but I resent your attempts to somehow link MDA to Agile principles. At the core of SCRUM is the concept of Empirical Process Control. Non-deterministic processes such as SCRUM come into their own when the complexity of the problem is such that outcomes can no longer be predicted using an open-loop, defined process. To understand the impact of complexity on process selection, take a look at the Cynefin sense-making framework.

    http://www.cynefin.net/

    So the bottom line is that Software development is inherently complex. It is non-deterministic, and can only be achieved successfully by using a probe/act-sense-respond approach. What does this have to do with MDA?

    I would argue that MDA promotes the exact opposite of SCRUM. MDA seems to promote the idea that software is somehow simple, and that software development is a deterministic process. MDA seems to take the deterministic waterfall approach further, by suggesting that much of the development process can be automated, using pre-packaged knowledge (templates) and applying it top-down.

    The last 30 years have shown that Software development is complex and requires a great deal of skill and ingenuity. MDA will not remove the need for creative skill and in many ways will only hinder it (others have expressed the reasons for this in several posts that you choose to ignore).

    BTW. Please declare your interest in this. Do you work for an MDA vendor?

    Please stop spreading this disinformation.

    Paul.
  113. MDA = Scrum + Kanban[ Go to top ]

    ...Software development ... is non-deterministic, and can only be achieved successfully by using a probe/act-sense-respond approach. What does this have to do with MDA?
    I suppose there are two points to consider. The first and most basic point is that application development still has an edit/debug cycle, even if MDA is used.

    The second and more important point is that MDA's edit/debug cycle is more agile than the same cycle for hand coding. The reasons for this are threefold:

    a) Model compilation eliminates many categories of coding errors that commonly occur in hand coding. Eg, model compilation should never result in a project that won't build. I discussed this earlier. We can revisit if you like.

    b) When an application's design evolves, there's an expanding "cone of destruction" of invalidated codebase, since an application's design may be projected into various places in the codebase. Ie, when a codebase evolves, its fragility causes lost work. Since MDA is more abstract than hand coding, MDA usually has less hand rework during evolution.

    c) An executable model can be debugged in simulation prior to building. Catching bugs earlier in the production pipeline is more agile than catching them at runtime. The greater the metamodel, the more can be validated statically or in isolation. And MDA's metamodel is unrivaled.
    MDA seems to take the deterministic waterfall approach further...
    That's not so. Generally the physical tooling's paradigm and the team's logical process paradigm are decoupled, regardless of whether the tooling is for handcoding or CASE. Despite your suspicion, nothing about MDA inherently imposes waterfall. When we were doing model compilation, we moved freely back and forth between analytical modeling and testing. The fact that we had push-button code generation allowed for things that are decidedly not waterfall. We achieved Scrum's steel thread before business analysis was anywhere near complete. That's positively iterative.
    MDA will not remove the need for creative skill and in many ways will only hinder it...
    If MDA decreases the people ratio of handcoders to analytical modelers and lets more of these folk focus on business requirements, then how has this "removed the need for creative skill"? Both handcoding and analytical modeling are acts of description. Ample creativity is inherent in both.
    Do you work for an MDA vendor?
    No, I write stock trading software. In the interest of full disclosure, do you consider MDA a rival (no matter how feeble) to your craft?
  114. MDA = Scrum + Kanban[ Go to top ]

    In the interest of full disclosure, do you consider MDA a rival (no matter how feeble) to your craft?

    NO, I consider it a hindrance at best and a massive distraction at worst.

    The industry has spent 10 years with Java even though we knew better at the outset. Another 10 years spent down the blind alley of MDA is just not worth thinking about.

    My concern is that the history of software development has shown that people are always looking for easy answers. Well, there are no easy answers. Software development is hard, period. Writing code is the least of your problems. Finding good optimal abstractions is difficult whether you represent them in UML or code, and in my experience ill-conceived 'middleware' abstractions like EJBs just tend to get in the way.

    The silver bullet doesn't exist. Developers just need to get better at programming computers. As developers begin to grasp this fundamental fact, they are beginning to adopt languages that give them the greatest degree of flexibility and expressive power (such as Ruby, Smalltalk and yes even Common LISP).

    P.
  115. MDA = Scrum + Kanban[ Go to top ]

    That's not so. Generally the physical tooling's paradigm and the team's logical process paradigm are decoupled, regardless of whether the tooling is for handcoding or CASE. Despite your suspicion, nothing about MDA inherently imposes waterfall. When we were doing model compilation, we moved freely back and forth between analytical modeling and testing. The fact that we had push-button code generation allowed for things that are decidedly not waterfall. We achieved Scrum's steel thread before business analysis was anywhere near complete. That's positively iterative.

    I agree that MDA doesn't force a waterfall development model in the traditional sense, and even encourages more iterative development.

    However, I think it almost eliminates the primary benefit of an iterative approach: lowered up-front commitment.

    You're probably about to scream: No way! You're completely ignorant about MDA!

    And maybe I am ignorant about MDA, but I'm very familiar with the pitfalls of large, prepackaged components.

    Your argument basically boils down to this: MDA improves productivity and/or quality by giving the developer bigger building blocks for creating systems, thereby reducing or eliminating the need for the developer to be concerned with technical details, and letting him focus on solving the business problem.

    The problem is a whole bunch of pre-fabbed architectural decisions are embedded in those larger building blocks. What happens when they don't fit the requirements?

    You need to find someone who can make blocks.

    The problem is, it's a lot harder to find someone who can make blocks than it is to find someone who can work with smaller blocks.

    It's the same problem as deploying a COTS "solution" for major business processes. If it works, it's great. But when the business process can't be made to match the "solution," it quickly becomes an absolute disaster.

    I have yet to see a match.

    So I contend that the mismatch between off-the-shelf MDA components and actual requirements will create more cost than the components will save.

    Dynamic languages are better because they make it easier to build and adapt abstractions. Well-built abstractions will be naturally easy to use, and won't need fancy tools to facilitate their use.
  116. MDA = Scrum + Kanban[ Go to top ]

    Hi Brian,

    Up late with a bit of time on my hands, so I thought I'd dispel some more of the myths you've been spreading. BTW, I must thank Erik for his post; as always he is spot on (lightening my workload), and it is good to see him warming to dynamic languages.
    ...Software development ... is non-deterministic, and can only be achieved successfully by using a probe/act-sense-respond approach. What does this have to do with MDA?
    I suppose there are two points to consider. The first and most basic point is that application development still has an edit/debug cycle, even if MDA is used. The second and more important point is that MDA's edit/debug cycle is more agile than the same cycle for hand coding. The reasons for this are threefold:
    a) Model compilation eliminates many categories of coding errors that commonly occur in hand coding. Eg, model compilation should never result in a project that won't build. I discussed this earlier. We can revisit if you like.
    OK, let's get to work. Eclipse compiles my model all the time, highlighting compilation errors while I type. So my model compilation time is zero. What compilation tool do you use? How long does your model take to compile? The code is the model.
    b) When an application's design evolves, there's an expanding "cone of destruction" of invalidated codebase, since an application's design may be projected into various places in the codebase. Ie, when a codebase evolves, its fragility causes lost work. Since MDA is more abstract than hand coding, MDA usually has less hand rework during evolution.
    OK. In Eclipse I evolve my design all the time. Fortunately my IDE helps; they call it refactoring (it's been quite popular for a few years now, ever since Martin Fowler wrote his book). So I change my design and my IDE "projects" that change all the way through my code base. I can preview the change too, and my IDE will tell me if it will break something. So no "cone of destruction", and my rework after a refactor is again... zero. How much time do you spend "projecting" changes through your MDA model?
    c) An executable model can be debugged in simulation prior to building. Catching bugs earlier in the production pipeline is more agile than catching them at runtime. The greater the metamodel, the more can be validated statically or in isolation. And MDA's metamodel is unrivaled.
    OK, the final myth. This one is too easy. I write tests before I create my model. It's called Test Driven Development (this has been around for a while too, introduced by a guy called Kent Beck). So rather than catching bugs, my tests ensure that I don't create them in the first place. I run all my tests all the time. No simulation; I execute chunks of the same model I will use in production. So time spent catching bugs is zero most of the time. Obviously, sometimes my tests are inadequate, so some bugs do get through.
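
    In miniature the cycle looks like this (Account is an invented example; any xUnit framework will do):

    import junit.framework.TestCase;

    // The test is written first: it is the design specification.
    public class AccountTest extends TestCase {
        public void testDepositIncreasesBalance() {
            Account account = new Account();
            account.deposit(100);
            assertEquals(100, account.getBalance());
        }
    }

    // Then the simplest code that makes the test pass.
    class Account {
        private int balance = 0;

        public void deposit(int amount) { balance += amount; }
        public int getBalance() { return balance; }
    }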

    There is an important point to all this. For Agile methods to work and be effective, the cost of the test-implement-refactor cycle (I don't do edit/code/test anymore :^)) must be low. Even with Java, when using an IDE like Eclipse this is possible. With a dynamic language like Smalltalk, the test-implement-refactor cycle is even faster. Your points demonstrate that MDA doesn't even come close.

    I persist in dispelling your myths as I know that there are developers out there who could be taken in.

    Brian, do yourself a favour and get educated. You make your points with such vigour, yet they're based on what can only be described as willful ignorance.
  117. MDA = Scrum + Kanban[ Go to top ]

    I write tests before I create my model.
    TDD and MDA surely work well together. A benefit of MDA here is that test failures will almost always be analysis bugs, perhaps due to vague requirements. Less time would be spent chasing failures due to faultily handwritten boilerplate, glue code, lifecycle state machinery, object model navigation, and so much other repetitive and predictable stuff. So with MDA the bugs are real bugs against the model. The business bugs are the ones best fixed first. MDA exposes them soonest. That's maximally RAD.

    Don't assume that a clunky generator like Andro is necessarily representative of how responsive an incremental IDE could be for MDA. Feel free to imagine an Eclipse plugin for incremental model compilation, debugging, refactoring, etc. Ideally MDA needs no code generation at all. An MDA virtual machine could interpret a model.

    I suppose MDA could eventually win a debugger tooling war against a traditional J2EE plugin for Eclipse. Imagine debugging by single-stepping through animated UML behavioral diagrams. Human strain would be reduced, which makes it agile. And analysis defects are guaranteed by MDA to be the most common case for debugging, so there's more emphasis on the many problems with a business's semantics. I'd dig such immersive and compelling debugging.
  118. The future is in the past[ Go to top ]

    TDD and MDA surely work well together.
    Hi Brian,

    I do believe that you are genuinely enthusiastic about MDA. One day, with further research, your enthusiasm may be justified. But today, given the alternatives, MDA is not the best choice for Agile development. I can tell that your experience of "coding" has been tainted by low-level static languages like "C" (Java is a bit better, but not much). This isn't the only type of programming language out there. Once you get over the desire to optimise your language for performance and decide to focus on cognitive issues, various higher-level programming paradigms come into play. The best example I know of this is Smalltalk. Back in '93, when I first looked at Smalltalk, it changed my view of programming entirely. In fact much of the Agile movement is based on doing things "the Smalltalk way".

    So any new graphical programming language would have to compete with a language like Smalltalk, and that's a tall order. The other issue is the ability to traverse the ladder of abstraction. Erik pointed out quite eloquently the need to avoid barriers when moving between different levels of abstraction. If the abstractions available to you (J2EE, MDA templates, frameworks, libraries etc) do not quite fit your problem, then you should have the ability to move seamlessly to a lower level of abstraction where you can define exactly what you need yourself. Smalltalk does this.

    This ability in Smalltalk is mainly inherited from LISP. LISP is known as the language for creating programming languages: you can use LISP to create a domain-specific language optimised for your given problem domain. This is what happens in Croquet, where a “networked, 3D virtual world” domain-specific language is created in Smalltalk, which developers and end users alike can use to build higher-level abstractions.

    You seem genuinely interested in finding better ways of abstracting. Take the time to explore some of the options I've described. When you do you may come to the same conclusion as I have. The most promising approaches to abstraction have been around for over 30 years, and all we need to do is get on and use them.

    In the early '80s, low-speed microprocessors and expensive memory precluded the use of dynamic languages on PCs, and we have been stuck with low-level alternatives ever since ("C"). Well, today we have ample processing power and memory, so it's time to get back to the future.

    Paul.
  119. Does performance limit abstraction?[ Go to top ]

    In the early '80s, low-speed microprocessors and expensive memory precluded the use of dynamic languages on PCs, and we have been stuck with low-level alternatives ever since ("C"). Well, today we have ample processing power and memory, so it's time to get back to the future.
    You seem to be alluding to Moore's Law or something similar. Did the low-performance regime of the early 1980s that you mention mean that Smalltalk was inherently unworthy crap? If not, and given speedup akin to Moore's Law, how long before MDA is put in a sweet spot? Is today's low penetration of MDA truly due to performance limitations, lack of agile tooling, or sheer lack of awareness?
  120. Does performance limit abstraction?[ Go to top ]

    In the early '80s, low-speed microprocessors and expensive memory precluded the use of dynamic languages on PCs, and we have been stuck with low-level alternatives ever since ("C"). Well, today we have ample processing power and memory, so it's time to get back to the future.
    You seem to be alluding to Moore's Law or something similar. Did the low-performance regime of the early 1980s that you mention mean that Smalltalk was inherently unworthy crap? If not, and given speedup akin to Moore's Law, how long before MDA is put in a sweet spot? Is today's low penetration of MDA truly due to performance limitations, lack of agile tooling, or sheer lack of awareness?

    Well I did say...
    One day, with further research, your enthusiasm may be justified. But today, given the alternatives, MDA is not the best choice for Agile development.


    Personally, I think MDA is a dead end. The only thing it brings to the party is a change in representation (textual -> graphical), which in itself isn't always beneficial (IMO). Conceptually and cognitively MDA adds nothing, and in fact leaves a lot out (block closures, duck typing, continuations, prototypes, late binding, etc).

    If in time I'm proved wrong then so much the better. But today IMO there are alternatives that provide a lot more promise.

    Paul.
  121. Higher level programming[ Go to top ]

    Imagine debugging by single-stepping through animated UML behavioral diagrams. Human strain would be reduced, which makes it agile. And analysis defects are guaranteed by MDA to be the most common case for debugging, so there's more emphasis on the many problems with a business's semantics. I'd dig such immersive and compelling debugging.

    I do not need to imagine, Brian. I can do this today using Smalltalk. Think about it: a textual, semantically correct description of the business problem that can be executed and debugged. Development tools that operate on (business) objects, allowing you to inspect them whilst they run, get notified of exceptions, then modify the textual description (code) on the fly and continue execution.

    When you use a language where the lowest-level thing is an object (not a primitive, pointer or any such crude machine-oriented construct), with late binding (an intelligent, introspective runtime, not "bare metal"), then all this is possible.

    In such an environment, a textual representation isn't a "strain", it's a positive benefit.

    Paul.
  122. Brian:
    Did you study any manufacturing in school, like mass production and economy of scale? ... I was taught kanban, a 25 year-old Japanese way.
    Paul:
    I would suggest that you take a look at methodologies like SCRUM or XP as they clearly describe how ludicrous this [MDA] mindset is.
    Are there any PhDs here? Here's a paper by the PhD who invented Scrum:
    The Roots Of Scrum: How Japanese lean manufacturing changed global software development practices.
    Is your architecture driven?
    Jon Tirsen yesterday blogged about a bomb recently dropped by a Mr. Poppendiek:
    "Lean/Agile is at it’s foundation, the fourth industrial paradigm, the first being Craft Production, Factory Production with machine tooling, Automation and Taylorism. These come along every hundred years or so and take a few decades to work through. Each paradigm includes the preceding one and makes it dramatically more productive.

    There is no need to sell agile except to organizations that want to survive long term. If they don’t see the threat/opportunity they cannot succeed with agile or lean nor can they sustain economic viability in the long run.
    "
  123. Hi Brian,

    Read your quote. Do not know what you're trying to say.

    Paul.
    Poppendiek is hinting at the selective pressures that drive the evolution of software supply, and I'm a wee bit surprised you missed this. Darwinian evolution is fueled by turnover, which is popularly regarded as "survival of the fittest". In the software scene this is wonderful, for without the continual diminishment and closure of unfit ISVs the market would be flabby and disoriented (as it always was before and during the bubble), and the developer experience sucked. Above, Poppendiek discusses the essence of thriving as an ISV in today's globally competitive and beautifully free market: agility and leanness of process, which of course translates into demand for trick tooling and possibly (as you've been claiming) nimble languages.

    I especially dig that Poppendiek likens software development to manufacturing. The comparison is apt, and it's strange and stupid that economists don't place software development in the manufacturing sector but rather in the service sector. Economists totally miss the infrastructural nature of software. They consider all intellectual creativity as a service rather than goods production, which supposedly renders as servants the authors and architects of movies, buildings, and software. Eg, according to economists, the dude who assembles the car works in manufacturing, but the dude who blueprints the factory is a servant and not a manufacturer.

    Sutherland (inventor of Scrum) and Poppendiek, both cited by me in this thread, have sidestepped the absurdity of the economists' classification and plainly described software development as a kind of goods production, akin to manufacturing. I like it when folks tell it like it is, and Poppendiek has gone further by implying that many traditional ISVs are potentially dying dinosaurs, especially due to their outdated process/methodology.
  125. I've read what you say. And I find it very interesting. I haven't read Poppendiek's work and Lean Software Development is one of the Agile methods that I've still yet to explore.

    In general I agree with the broad point that the industry is going through an "evolutionary" phase. I subscribe to the scrumdevelopment and xp discussion forums (on Yahoo groups), and there have been threads talking about this very same point recently.

    The analogy to manufacturing I'm not sure about, as I know very little about manufacturing. But I do agree that we need tools that are more appropriate to the challenges we now face and will face in the future. I describe these challenges as "people-centric". Machine-centric approaches like "business process re-engineering" have failed because they do not take into account the nature of people.

    People and organisations are basically chaotic, illogical at times and highly complex. In such an environment the type of approach that works best is "probe/act-sense-respond" (Cynefin theory), or basically "try it and see". Moulding the technology to the people rather than the people to the technology.

    To do this the tools need to be nimble, since with such an approach change is the rule rather than the exception. So no "big releases" imposed top-down, but the evolution of systems bottom-up, by releasing small regular increments and gaining feedback from real users.

    I agree with your emphasis on "analysis" and "modelling". I just believe that graphical tools aren't the most nimble way of doing this. I do occasionally produce graphical models, but they are always abstract and ephemeral. Once I've drawn them on a whiteboard or on a piece of paper I throw them away. The models that I keep are textual (code).

    The reason I do this is that concrete feedback is crucial to the approach I describe here. When modelling, concrete feedback means execution: "try it and see". Text interpreters and virtual machines are great at giving instant feedback on the correctness of my model.

    So my point, as it has been all along, is that I'm all for Agility, I just do not see what it has to do with MDA.

    BTW, if you are interested in the approach to modelling that I adhere to, take a look at "Domain Driven Design" by Eric Evans. I try to use a "ubiquitous domain language" in my code so that my code is my model (no other models needed). No trick tooling, just clever modelling.

    Paul.
  126. The models that I keep are textual (code).

    Actually, I do keep one other model: User Stories (requirements) and User Acceptance Tests (on a wiki). The stories and associated tests are also a model (they model the user's expectations). A lot of work has been done on making this model executable too, so that the emergent system can be validated at any stage (feedback on how far we have come towards meeting the user's expectations). If you are interested, take a look at the FIT framework by Ward Cunningham.
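
    To give a flavour of FIT (this fixture and its numbers are made up): the expectations live in a table on the wiki, and a small fixture class binds the table to the real code:

    import fit.ColumnFixture;

    // Backs a wiki table of expected behaviour, something like:
    //
    //   | DiscountFixture         |
    //   | orderTotal | discount() |
    //   | 100        | 0          |
    //   | 1000       | 50         |
    //
    public class DiscountFixture extends ColumnFixture {
        public double orderTotal;

        public double discount() {
            // In a real fixture this would delegate to the domain
            // code under test; inlined here to keep the sketch small.
            return orderTotal >= 1000 ? orderTotal * 0.05 : 0;
        }
    }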

    So currently the consensus in the agile world is for two models: the requirements model (stories/tests) and the solution model (unit tests/code); both are executable.

    I would suggest that you go onto the agile forums I mention, as there are people there better qualified than me (including the people you quote) with whom you can explore these ideas.

    Paul.
    People and organisations are basically chaotic, illogical at times and highly complex. In such an environment the type of approach that works best is "probe/act-sense-respond" (Cynefin theory), or basically "try it and see".
    Totally. That's why the highest level of the Capability Maturity Model was dubbed The Optimizing Level and emphasizes learning. Surprise, surprise.
    I do occasionally produce graphical models but they are always abstract and ephemeral. Once I've drawn them on a white board or on a piece of paper I throw them away. The models that I keep are textual (code).
    No one ever asks if the QWERTY keyboard is suboptimal, even though the QWERTY layout was invented to slow typists down, back before keyboards were electric. Now keyboards are electric, and still no amount of repetitive motion syndrome ever seems to make folk question the status quo.

    We both know diagrams make intuitive sense. The first thing my coworker did last week when explaining a new EAI constellation to me was to diagram it. The WSDL-2 spec is fat with UML. The EJB-2 lifecycle spec would be confusing as heck were it not for the accompanying state diagrams. But since the mainstream hasn't been able to fully breathe executable life into diagrams, you assume it's a dead end. Fortunately I've had luck at compiling executable deliverables from visual models.
    ...I'm all for Agility, I just do not see what it has to do with MDA.
    Please don't trouble yourself with the literature on agile modeling or OMG's paper on agile MDA. You've got dynamic typing in your toolbelt, so what more can you possibly need to thrive? I'm in a rebellious mood this morning, and you likely missed an interesting assertion made by Sam S recently in TSS's "Is Ruby replacing Java" thread. He said:
    Less documentation and complex design for simple problems is typical of C/C++ --> Java converted architects. Java is better off if it can offload those people to Ruby.
    Care to stew on that awhile?

    Sure, the hackers with the niftiest tricks have a tactical advantage. But that's not what the business world is about. Process repeatability is far more important as a sustaining quality for business. Imagine a Pottery Barn store where the staff could blow custom glass. Naively that would seem like a competitive advantage, but it isn't. Neither is JavaScript's ability to specialize instance behavior after instantiation. I'm toying with the possibility that Ruby's DSL capability is fundamentally enabling, but I'm still skeptical. I suppose then I should appreciate your skepticism of MDA.
    If you are interested in the approach to modelling that I adhere to, take a look at "Domain Driven Design" by Eric Evans.
    Will do, and I hope to post feedback. Thanks for the tip.
  128. I'm toying with the possibility that Ruby's DSL capability is fundamentally enabling, but I'm still skeptical. I suppose then I should appreciate your skepticism of MDA.
    Glad to see that you are taking a deeper look at dynamic languages like Ruby. If you get a chance, Smalltalk is an eye-opener also.
    If you are interested in the approach to modelling that I adhere to, take a look at "Domain Driven Design" by Eric Evans.
    Will do, and I hope to post feedback. Thanks for the tip.
    No problem, I'm sure that people on the Agile forums would be more than happy to debate with you too:

    http://groups.yahoo.com/group/scrumdevelopment
    http://groups.yahoo.com/group/extremeprogramming

    Paul.
  129. If you are interested in the approach to modelling that I adhere to, take a look at "Domain Driven Design" by Eric Evans.
    According to DomainDrivenDesign.org:
    The premise of domain-driven design is two-fold:

    - For most software projects, the primary focus should be on the domain and domain logic; and

    - Complex domain designs should be based on a model.
    That's what MDA's all about, and what I've been spewing here all along. DDD is MDA without the tools. Until MDA's toolchain is ready to impress, DDD seems a great stepping stone.
  130. If you are interested in the approach to modelling that I adhere to, take a look at "Domain Driven Design" by Eric Evans.
    According to DomainDrivenDesign.org:
    The premise of domain-driven design is two-fold:

    - For most software projects, the primary focus should be on the domain and domain logic; and

    - Complex domain designs should be based on a model.
    That's what MDA's all about, and what I've been spewing here all along. DDD is MDA without the tools. Until MDA's toolchain is ready to impress, DDD seems a great stepping stone.
    Hi Brian,

    I know what MDA claims, or what perhaps is the stated goal of MDA. My problem is with what MDA can practically deliver today. Unfortunately, like CASE before it, MDA has a credibility gap. In order to bridge this gap, MDA would be better served if you talked about what it can practically achieve today rather than what it will achieve in the future.

    None of us has a crystal ball, and no one knows what MDA will turn into in the years ahead. Perhaps MDA could become "dynamic" MDA; perhaps MDA will be dropped like CASE before it. Who knows?

    A lot of research has gone on over the last 50 years, and the best of it does not point to graphical programming as the way forward. Graphical programming has been flirted with on several occasions in the past and dropped. Not because of performance, but because it didn't work.

    Now I can't tell the future either, but I for one will not be betting on MDA.

    I don't want to keep repeating myself. Try your ideas with the people on the Agile forums. I'm sure you'll get the same response.

    Regards,

    Paul.
  131. Modelling, Domains and Programing[ Go to top ]

    Hi Brian,

    A quote from Eric Evans, from an article where he describes model-driven design and its relationship to domain-driven design:
    ...To accomplish this, domain experts and software experts have to experiment together to find ways of organizing their knowledge of the domain that serve the purpose of software development. They must plunge into their subject matter, letting it lead them, setting their priorities to build software first and foremost to represent and reason about the domain. They must peel away the superficial aspects even of that domain and dig out the core principles.

    That distillation of knowledge into a clear set of concepts is a modeling process. Those models live in the language of the team, and every conversation in that language can be a modeling session. And so domain-driven design leads inevitably to modeling because modeling is the way we grapple with understanding complex domains.

    As for programming:
    For in model-driven design, the design is not only based on the model, a model is also chosen to suit design and implementation considerations. These two constraints are solved simultaneously through iteration. This reflects, in part, practical limits of our particular hardware or our particular programming languages or infrastructure. This is essential, but something even more fundamental also happens. Programming reveals subtle but important wrinkles in the logic that would never be noticed any other way (at least not cost-effectively). When these are ironed out in the coding of a model-driven design, the programmers refine their ideas about the domain to make the wrinkles fit. Teams who embrace model-driven design are aware that a change to the code is a change to the model. Small anomalies can be clues to big model insights.

    Here is a link to the full article:

    http://domaindrivendesign.org/articles/blog/evans_eric_ddd_and_mdd.html


    I think we have a fundamental difference in our understanding of "what is a model". If you look further into DDD you will begin to understand my view on modelling and how it relates to technology, be it MDA, Java or whatever.

    Paul
  132. MDA vs MDD vs DDD[ Go to top ]

    I think we have a fundamental difference in our understanding of "what is a model".
    Your love of DDD tells me that you regard a model as something diffuse, elusive, and woven indiscriminately and inextricably into the code, such that when one examines a line of code it's far from obvious whether it encodes a model detail or is instead an implementation artifact devoid of domain semantics. Whereas with anything that is model driven, be it MDD or MDA, the model is distilled, apparent, and directly navigable and manipulable. Unlike MDD, with MDA the model is standalone and can be retargeted as-is to alternative pluggable architectures. Eg, with Sun's Ace the same domain model can be recast into either a 2- or 3-tier architecture without disturbing the model. DDD can't do this.
  133. MDA vs MDD vs DDD[ Go to top ]

    I think we have a fundamental difference in our understanding of "what is a model".
    Your love of DDD tells me that you regard a model as something diffuse, elusive, and woven indiscriminately and inextricably into the code, such that when one examines a line of code it's far from obvious whether it encodes a model detail or is instead an implementation artifact devoid of domain semantics. Whereas with anything that is model driven, be it MDD or MDA, the model is distilled, apparent, and directly navigable and manipulable. Unlike MDD, with MDA the model is standalone and can be retargeted as-is to alternative pluggable architectures. Eg, with Sun's Ace the same domain model can be recast into either a 2- or 3-tier architecture without disturbing the model. DDD can't do this.

    Hi Brian,

    What I'm trying to distinguish between is the model and its representation. The best definition I've come across for a model defines it as a simplified abstraction of the real world, created in order to facilitate a stated purpose. So using this definition, an Ordnance Survey map is a model of some land. A map is a simplification, as it doesn't contain everything on the land it represents. For example, you can't count the sheep on a map.

    The purpose of an Ordnance Survey map was originally, I believe, to help military tacticians plan wars; today the most common use is by walkers and ramblers.

    So the thing you model has an existence independent of the model itself. The model is only a view of the thing, and the contents of the model are determined by the purpose to which you want to put your model. On top of this, how you choose to represent a given model may vary while the contents of the model remain the same. So for example, I'm sure the French have an equivalent of the Ordnance Survey, and they probably use their maps for similar purposes; although the content may be the same, the representation (icons, legends, colours etc) is probably very different.

    Very long-winded, but I wanted to be clear.


    So whether a model is in textual or graphical form, the content of the model may be the same. Data schemas are a good example of this. I could choose to represent a data schema in UML, or I could choose to do it in XML. The model is the same, but the representation is different.

    An area where graphical models struggle is process. For example, it is very easy to describe process steps in text, but to represent the same process "model" graphically is often difficult. Graphical models excel in other areas; I'm sure a UML representation of a data schema will be easier to comprehend than an XML one.

    DDD takes the ideas above to their logical conclusion. What we have described above is a method of communicating a mental picture. So the model that counts is the model in people's minds. This model is most commonly expressed in language through conversation. Most domain experts aren't happy with UML, XML or code - but they are happy talking. Through talking we try to build a mental model that we can represent and test in our programming/modelling language.

    The hope of DSLs, like the one you wrote, is to move the programming language closer to the language of the domain, so that domain experts can represent their mental model in a form that is executable by a computer.
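
    For example (the shipping domain and every name here are invented), code that reads in the language the domain expert actually speaks:

    // The expert says: "A cargo is misdirected when it is on a voyage
    // that is not part of its itinerary." The code says the same thing.
    class Cargo {
        private Voyage currentVoyage;
        private Itinerary itinerary = new Itinerary();

        boolean isMisdirected() {
            return currentVoyage != null && !itinerary.includes(currentVoyage);
        }
    }

    class Voyage { }

    class Itinerary {
        private java.util.List legs = new java.util.ArrayList();

        boolean includes(Voyage voyage) { return legs.contains(voyage); }
    }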

    Paul.
  134. MDA vs MDD vs DDD[ Go to top ]

    I think we have a fundamental difference in our understanding of "what is a model".
    ...
    So the model that counts is the model in people's minds. This model is most commonly expressed in language through conversation. Most domain experts aren't happy with UML, XML or code - but they are happy talking.
    We both see models as abstract, but I also think an abstract model deserves formalism. You don't, and so your fuzzy notion of a model can never be amenable to tooling. With your mindset, implementation handcoding is inescapable and there can be very little "beyond Java" other than petty tweaks such as dynamic typing and text DSLs.

    I think the impact of your mindset is to keep software development laborious and tedious, and therefore costly, slow, and unreliable. Clearly we disagree on where the economic sweet spot is on the labor/automation substitution curve. But at least I know that the technological progress of civilization is always pushing the sweet spot away from labor.
  135. MDA vs MDD vs DDD[ Go to top ]

    You don't, and so your fuzzy notion of a model can never be amenable to tooling.
    Hi Brian,
    Again you miss the point. The opposite, actually: my notion of a model is very concrete: THE MODEL IS THE CODE. How much more amenable to "tooling" can you get than that?

    Paul.
  136. MDA vs MDD vs DDD[ Go to top ]

    THE MODEL IS THE CODE.
    You're flip-flopping back and forth, desperately searching for stable ground. Only the day before you insisted:
    So the model that counts is the model in people's minds. This model is most commonly expressed in language through conversation. Most domain experts aren't happy with UML, XML or code - but they are happy talking.
    I explicated my notion of an abstract model. You first insisted the model is in domain experts' heads and then said no, the model is the code. So which do you really believe? And why did you change your story?
  137. MDA vs MDD vs DDD[ Go to top ]

    THE MODEL IS THE CODE.
    You're flip-flopping back and forth, desperately searching for stable ground. Only the day before you insisted:
    So the model that counts is the model in people's minds. This model is most commonly expressed in language through conversation. Most domain experts aren't happy with UML, XML or code - but they are happy talking.
    I explicated my notion of an abstract model. You first insisted the model is in domain experts' heads and then said no, the model is the code. So which do you really believe? And why did you change your story?
    Hi Brian,

    It is completely consistent.
    What I'm trying to distinguish between is the model and its representation.

    Think about it....

    Now, why do you think my preferred representation is code? Could it have something to do with the fact that working code is the only thing that will deliver value to my customer?

    Paul.
  138. MDA vs MDD vs DDD[ Go to top ]

    This model is most commonly expressed in language through conversation. Most domain experts aren't happy with UML, XML or code - but they are happy talking. Through talking we try to build a mental model that we can represent and test in our programming/modelling language.

    Hi Brian,

    What is controversial about the above? If you model in a different way then please let me know. Please show me a UML model that cannot be represented in code.

    UML is a graphical language, based on OO programming concepts. Sometimes I draw UML, or sometimes I just go straight to code, outlining the top-level objects (stubs with no detailed implementation, and not all the attributes). I quickly translate everything to code, so that I can test my model through execution (TDD). Any UML is ephemeral.

    I then quickly get feedback on the correctness of my mental model. It could be that I misunderstood the domain expert (poor communication), or the domain expert's "model" is incomplete or flawed in some way (muddled thinking). I take back my findings and discuss them in English (I don't show the domain experts a stack trace). My mental model improves and I modify the code model accordingly.

    I think we all do this, perhaps not as consciously. DDD is about creating a “ubiquitous domain language” that is shared by everyone in the team and the domain experts. This language shows up in the code in the names of classes, objects, methods etc, just as it would in UML. This is consistent with everything I have been saying, and with Martin Fowler's description of DSLs.

    Paul.
  139. MDA vs MDD vs DDD[ Go to top ]

    ...my notion of a model is very concrete: THE MODEL IS THE CODE.
    If what you say were correct, then two differing codebases could never be manifestations of the same model. Yet Sun's Ace insists that the same formal model can be translated into alternate implementations. So when you disagree with me about this, you are also disagreeing with Sun Labs.

    A great implication of Ace's pluggable implementations is a devaluation of any one particular implementation. The generated codebase becomes less important than the formal model. If the deliverable is hand coded, then the monkeys who crafted it have produced something less valuable than if alternate implementations weren't easily producible. Ultimately what's devalued here is the programmer. The analyst's value is increased. Productive potential is transferred from the nose-picking programmers (you and me) to the charismatic analyst.

    Assembly language hackers once argued that their lovingly crafted sequences of assembler instructions had unique value, but the advent and domination of high-level languages utterly laid waste to their fantasy. The freely automating free market mostly condemned the assembler hacker to commercial extinction. Only a luddite would think this trend has stopped. Management and investors have an immense incentive to continue this trend, since they mostly regard developers as a cost and certainly not as social equals.

    What I find interesting about your contribution to this forward-looking "Beyond Java" thread is that you saved your most passionate argument for a defense of the status quo -- the decrepit way software has been developed for decades -- insisting that handcode matters and model automation doesn't.
  140. MDA vs MDD vs DDD[ Go to top ]

    Hi Brian,

    I can see that your only objection to what I am saying is the word CODE. If I had said Textual Domain Language, or even better Graphical Domain Language, or used a standard term like Unified Modelling Language, then I'm sure you would be happy.

    I admire your work on BridgePoint and from what you say at least the initial incarnation proved to be a successful external DSL, for you and the team.

    I think we probably agree on most things. Agile software engineering techniques began to take off when practitioners noticed that what they actually did was very different from the theory. So they started focusing on what they actually did day to day and on what worked. This was software development as a craft: stuff like refactoring, iterative development (write a bit, then test a bit), emergent design etc. People like Martin Fowler and Kent Beck started to write this stuff down. It turned out that what they did was very different from software development as a science, which was the stuff they read about, and sometimes wrote about themselves, in books.

    I've got a possible explanation for why that is, but I won't go into it here. Anyway, I noticed that when you talk about what you actually did, it is not that different from something I would do. XP was born when a group of practitioners decided to strip out all the theory and just do the minimal things needed to produce working code.

    I do the same: I do not look at it as a science; I see it more as a craft, a creative practice. A lot of creative activities require a great deal of technical skill, e.g. music or painting, but the resultant output is still largely creative. I see software development as the same.

    So we model, we both speak to domain experts, we both do analysis, we both test our ideas through execution, and we both work iteratively and incrementally (do a bit, test a bit). I just do not see UML as a sacred language that should be elevated above everything else. I keep my eye on the ball by remembering the whole point behind everything I do, namely producing working code. In practice you probably do the same. Now when they finally produce a UML compiler, then UML will be working ‘code’. I just do not think that it will be the best/most productive working code out there.

    I'm sure when you created your external DSL it was because it was the most efficient way to generate clean code. I’m sure you didn’t do it as a means of furthering the "domination of high level languages", get real.

    To be fair, I like UML and use it a lot. I just try and keep it in context:

    http://martinfowler.com/bliki/UmlAsSketch.html

    The thing with value is working code, because that is what I get paid for delivering. I'll leave you with something I said a while ago:
    The silver bullet doesn't exist. Developers just need to get better at programming computers. As developers begin to grasp this fundamental fact, they are beginning to adopt languages that give them the greatest degree of flexibility and expressive power (such as Ruby, Smalltalk and yes even Common LISP).

    We do not need to agree, and I have valued the interaction.

    Peace,

    Paul.
  141. MDA vs MDD vs DDD[ Go to top ]

    Hi Brian, I can see that your only objection to what I am saying is the word CODE. If I had said Textual Domain Language, or even better Graphical Domain Language, or used a standard term like Unified Modelling Language, then I'm sure you would be happy.
    Me be happy? What really gets my serotonin squirting is thinking about model projection. That's when the problem space gets projected into the solution space. The problem space is scoped by an abstract formal model. The model is projected by cookie-cutter translation into an implementation that scopes the solution space.

    When you talk about handcoded Java or Ruby as the future, I cringe, since the benefits of model projection are lost. Their development just doesn't scale. Java and Ruby are very hard to project/transform. Stylesheets won't take them. They almost can't be rendered diagrammatically. All a Java or Ruby program can ever be is exactly what's written. Whereas an abstract formal model can be reused by projection, even projected onto an architecture not known when the model was drafted.

    Some of this flexibility can be gotten from aspect weaving, but frankly weaving is a low-level bandaid, especially in the era of postmodern programming, wherein our industry is increasingly dominated by integration. Anyway, Shlaer-Mellor had model coloring for feature crosscutting at least a decade before aspects were invented.
    I admire your work on BridgePoint and from what you say at least the initial incarnation proved to be a successful external DSL, for you and the team.
    I was an analyst end-user, also hand creating his own architecture of generative templates and runtime virtual machinery. This was when BridgePoint was being invented by a startup near RTP, NC in about 1990, using the Shlaer-Mellor notation since UML hadn't been invented yet.
    In practice you probably do the same. Now when they finally produce a UML compiler, then UML will be working *code*. I just do not think that it will be the best/most productive working code out there.
    Your complaint of suboptimality was first raised and then retracted against high level languages, and then again with virtual machinery. It's a worn-out criticism; Moore's Law utterly favors the pursuit of labor savings. Betting against the eventual emergence of model translation as a trend just doesn't make sense to me, not long term.

    Eclipse's completion tricks make the QWERTY keyboard comparatively less and less useful. Folks are actually programming more and more with their mice. That proves that the captured words are nothing but symbolic choices and decisions. And this makes diagramming a potential fit.
    I'm sure when you created your external DSL it was because it was the most efficient way to generate clean code.
    As a user, I went with the vendor's action language, a dialect of SQL.
    The thing with value is working code, because that is what I get paid for delivering.
    An architect doesn't get paid for delivering a building. He's paid to design an abstraction, a template that can potentially cut many cookies. The dude with the hammer hardly gets paid at all, and might even be illegal.
  142. MDA vs MDD vs DDD[ Go to top ]

    All a Java or Ruby program can ever be is exactly what's written.

    That's a rather absurd statement.
    Whereas an abstract formal model can be reused by projection, even projected onto an architecture not known when the model was drafted.

    I don't think you even need metaprogramming to achieve this, much less a heavy-weight MDA tool chain. Or to fail to achieve it for that matter. It just takes good/bad engineering skills.
    Your complaint of suboptimality was first raised and then retracted against high level languages, and then again with virtual machinery. It's a worn-out criticism; Moore's Law utterly favors the pursuit of labor savings. Betting against the eventual emergence of model translation as a trend just doesn't make sense to me, not long term.

    I think Paul's accusation of suboptimality has to do with developer productivity, not computational efficiency. Unfortunately Moore's Law doesn't apply to the human mind yet.
    Eclipse's completion tricks make the QWERTY keyboard comparatively less and less useful. Folks are actually programming more and more with their mice. That proves that the captured words are nothing but symbolic choices and decisions. And this makes diagramming a potential fit.

    Personally, I type a lot faster than I use a mouse.
    An architect doesn't get paid for delivering a building. He's paid to design an abstraction, a template that can potentially cut many cookies. The dude with the hammer hardly gets paid at all, and might even be illegal.

    Code is the template. One set of delivered code can be deployed in a cookie-cutter fashion millions of times, potentially for a lot less than a building.
    What really gets my serotonin squirting is thinking about model projection.

    Wasn't that a Bile Blog topic? Sorry.....
  143. MDA vs MDD vs DDD[ Go to top ]

    Hi Erik,

    I didn't see the point in responding to Brian. I'm glad someone has done it for me.

    Thanks.

    PS. If there are any real issues you would like a sounding board on, I'm more than willing to continue the discussion (minus the distraction of course).

    Cheers,

    Paul.
  144. Unfortunately Moore's Law doesn't apply to the human mind yet.
    So many folk take Moore's Law literally and have no imagination of its consequences. From geometrically growing circuit density come many derivative phenomena, including software of ever growing sophistication, and so always better development tools. In this sense de-skilling is a consequence of Moore's Law, so Moore's Law very much regards which brains can contribute and how.

    This makes Moore's Law an empirical analog of the parable of John Henry's spike-hammering heart attack. Ie, industry always trends toward automation. It's luddite to bet against Moore's Law, or to deny its relevance to labor-saving software, including development tools. If you bet against Moore's Law, you'll suffer Mr. Henry's heart attack. Ie, commercial extinction.

    So Moore's Law devalues brain power, and a knowledge worker is forced to climb ever higher up the value chain and further from the details of computing machinery. Today's lucrative knowledge work is tomorrow's worthless grunt work. Or do you wish folks could still get paid for pounding spikes?
  145. (I don't know if those are words, but I'll move on)

    When a task is "de-skilled" the skill required to perform it is reduced or eliminated. For example, a couple years ago I designed a cost modelling application. The model and data sources were/are pretty complicated, but the application itself is relatively easy to understand. Prior to deployment, performing the particular form a analysis required a lot of knowledge and a lot of effort. Afterwords it was quick and easy. Please note that there is a downside to this. People now receive lots of information that they sometimes don't understand and misuse, and it has had negative impacts. But overall it's been a success.

    When a task is "Up-skilled" the skill required to perform the task is increased, while the task can be performed with significantly more efficiency. The economy of up-skilling a task depends on the efficiency gains and the cost of the up-skilled worker vs the legacy worker. Replacing a line-worker in the factory with a robot that needs to be programmed is an up-skilling. You take one highly skilled industrial robot programmer and replace a whole slew of less skilled factory workers.

    When a task is obsoleted, automation eliminates the need for the skills required by the task. For example, compilers have nearly (but not quite, and never will have) obsoleted assembly language programming.

    What I'm advocating (and I think Paul is, too) is the up-skilling of software development. Currently software development is fraught with tedious, mind-numbing tasks. This creates a market for programmers who aren't particularly skilled. Through dynamic (meta)programming, the skilled software engineer can use his knowledge to make the tedium go away. He can even do it incrementally as he becomes more familiar with the domain. The software engineer is not unfamiliar with the nuts-n-bolts - he is familiar enough to abstract them away.

    What you're advocating is a combination of de-skilling and obsoleting. Under your scenario, the software developer primarily needs to be familiar with the problem domain, and does not need to be concerned with the nuts-n-bolts. The tedium programmer is obsoleted.

    You also exalt a new class of MDA tool creators.

    I don't think your solution will work, for two reasons: I don't think you can successfully eliminate the good software engineer's skills, and by centralizing "tedium elimination" with the MDA tool creators I think you'll create more tedium than you eliminate, because customers will ultimately want to tweak the results of the MDA translation process into an unmaintainable mess.

    Ultimately your solution says: enterprises shouldn't try to solve any problems that don't fit the pattern of problems that have been solved before.

    Novel problems are where the real value is. Too much time is spent on developing the same CRUD application with slightly different data. Businesses need applications that give them a competitive advantage. That means applications that their competition doesn't have and can't simply buy from some vendor or have built by some consultant or big IT outsourcer.

    The bottom line is I think the highest gains will come from enabling smart software engineers to work smarter, not from pre-packaging cookie-cutters for implementing components that all fall into the same repetitive patterns.
  146. The economy of up-skilling...
    I agree that upskilling can be economical. You gave the illustrative example of a robot programmer replacing a laboring crew. But the scenario I care about is where the developer is held constant and the tooling is the variable. In this case the dimension that matters most is cognitive load.

    Ruby has a higher cognitive load than Java, and that's why Ruby can't supplant Java. Eg, determining a Java method's parameter types is trivial, but this analysis can be quite involved with dynamically-typed Ruby. This is the kind of complication that makes Ruby refactoring suck.

    My love of model projection is predicated on it reducing cognitive load. Abstraction and cognitive load are polar opposites. The reduction of cognitive load is synonymous with increased abstraction, which is the manifest destiny of software engineering. That's why it deserves a shout out in this forward looking discussion.
    Currently software development is fraught with tedious, mind-numbing tasks. This creates a market for programmers who aren't particularly skilled.
    Quite the opposite. The disorientation and other error-prone perils of repetitive pseudo-complexity actually weeds out lesser monkeys. Ie, the tedium artificially raises the skill bar and is a barrier to entering/thriving in the talent pool.

    The labor market is more easily entered when less knowledge of the solution space is needed. The ideal of abstract modeling hints at the imaginary possibility of development with only knowledge of the problem space. Not only does abstract modeling promise to make development more accessible, but it also promises to make development skills more portable, more widely applicable. This dream is so sweet for me to ponder. Anyway, if model projection actually guarantees anything, it's that developers do less of the tedium you complain of.
    What you're advocating is a combination of de-skilling and obsoleting.
    Heck yeah, with the developer climbing up the value chain, into the problem domain, with knowledge transfer and analysis increasingly preoccupying him. The developer becomes more accessible to his audience and his investors. This gives him more social parity and makes him more charismatic. Again, so very sweet to consider.
    Ultimately your solution says: enterprises shouldn't try to solve any problems that don't fit the pattern of problems that have been solved before.
    All frameworks do this to some extent. The advantage of model projection is its ability to plug in new architectures without disturbing the abstract formal model. This reduces the effort of speculatively evaluating architectural alternatives. It also means an application can more easily shed an outdated architecture.
  147. When a task is "Up-skilled" the skill required to perform the task is increased, while the task can be performed with significantly more efficiency.
    I totally agree, Erik. This is why I see software as analogous to a creative art, rather than analogous to industrial production. Musicians, fashion designers, etc. must still master their craft, undergoing intense training, often under the tutelage of an established professional.

    Once they become masters themselves though, the focus of their work moves from gaining technical skills to being creative. So 98% perspiration (practice and training) followed by 2% inspiration (invention and innovation).

    Bespoke software is the same: it is not production; it is new product innovation, akin to what they do at Toyota when they design a new car. Each instance is a one-off, new to the world, a brand new creation.

    Such a process cannot be automated; you cannot automate invention and innovation. This is where we went wrong when we tried to apply a "defined process" to software development - those books I spoke of earlier that showed software development as a neat, repeatable sequence of steps (Waterfall). In truth it doesn't work like that. Jim Highsmith has written widely on the subject:

    http://www.adaptivesd.com/pubs.html

    If you went to Toyota today, you would see car designers using the latest computers, but they have still mastered their craft. They still model in clay, they still draw sketches. They still take inspiration from classical masterpieces that were built long before computers, like the Ferrari 250 GTO.

    The computer helps, but it is no replacement for a deep understanding of automotive design. In the same way that the classical musician wasn't made obsolete by the invention of the synthesizer, the skilled programmer will be around for a long time yet. If anything I'm noticing that "master craftsmen" in software are becoming even more in demand as managers begin to realise that the most important factor for project success is not tools, but high quality people.

    P.
  148. ...I see software as analogous to a creative art, rather than analogous to industrial production.
    That was unclear till now, and it makes you a romantic. It also makes me wonder which you enjoy most: analysis, design, or coding? I prefer analysis, though obviously you and I are experts at all three.

    I say romantic, since you're advocating handcrafting for things that technically don't need it. You're inviting human error, inefficiency, and other labor costs. The Shlaer-Mellor way of model projection proves that only analysis needs minding. Design and coding are automated.
    Such a process cannot be automated; you cannot automate invention and innovation. This is where we went wrong when we tried to apply a "defined process" to Software Development.
    Model projection understands that invention and innovation can't be automated. What model projection does is channel creativity toward abstraction, into a sandbox of analysis.

    What model projection does *not* do is impose a defined development process. Even MDA doesn't do that. OMG's MDA specification gives four methodologies for applying projection, and then OMG added a fifth, Agile MDA. Clearly model projection could be used any way you want. My style is Scrum, and I have no trouble reconciling this with model projection.
    Those books I spoke of earlier that showed software development as a neat repeatable sequence of steps (Waterfall). In truth it doesn't work like that.
    Sure, iterative beats waterfall. That wisdom is part of Scrum and XP. Model projection doesn't favor waterfall. That's an often repeated luddite fallacy.

    The team's understanding of requirements is always slowly growing, so the model is always changing. Traditionally, requirements flux causes loss of work, and the lost work was (expensively) handcrafted. Model projection eliminates so much of this waste. The analyst never loses handcrafted code or design, these were generated.
  149. The analyst never loses handcrafted code or design, these were generated.
    This gives the analyst more freedom to explore the subject matter. He can speculatively change his model and regenerate quickly. Within the knowledge domain, he can take greater risks than a traditional programmer, since the cost of a mistake is less.
  150. The team's understanding of requirements is always slowly growing, so the model is always changing. Traditionally, requirements flux causes loss of work, and the lost work was (expensively) handcrafted. Model projection eliminates so much of this waste. The analyst never loses handcrafted code or design, these were generated.

    Your assumption here is that it requires less effort to update the "model" than it does to update the "code."

    I contend that there's absolutely nothing that makes this true. I've used Python's metaprogramming capabilities extensively. I know from experience that Python code where metaprogramming is extensively used is easier to update than a model in UML. In UML you constantly bang into the limitations of the modelling tool. In Python you constantly notice ways that your code can be made more elegant - thereby accelerating your ability to change it to adapt to new requirements.

    IMHO, well written code is easier to change than a UML model. I think that applies to all code, including Java and C++ (well, maybe not assembly...). The reason people think code is "expensive" to change is that too many programmers are either too rushed or simply lack the skill to write good code.

    Which is why I think software development should be up-skilled. Manufacturing laborers have given way to manufacturing engineers. Programmers should give way to software engineers. Programming is a skill, software engineering is a discipline.

    Paul has the same idea as me, he just seems to emphasize the creative end of it (I challenge you to find a good engineer who isn't creative within his domain) while I emphasize the scientific side of it (I challenge you to find a good artisan who doesn't have a solid understanding of the principles underlying his craft).

    But what many managers want is the programmer as technician, who has engaged in rote memorization of libraries and frameworks with no understanding of the principles behind them, and will blindly obey the whims of management.

    BTW - I have nothing against technician-type people. Technicians can be every bit as intelligent and skilled as engineers. They just have different skills and are suited to different tasks.
  151. The reason people think code is "expensive" to change is that too many programmers are either too rushed or simply lack the skill to write good code.
    +1 I couldn't agree more.
    Programmers should give way to software engineers. Programming is a skill, software engineering is a discipline. Paul has the same idea as me, he just seems to emphasize the creative end of it.
    Again, I totally agree, and in another discussion I would be emphasising the engineering aspects. In short, good developers are highly skilled professionals, not technicians. I have heard the ratio of 1 to 10 used to describe the difference in productivity between the top software engineers and mediocre ones.
    But what many managers want is the programmer as technician, who has engaged in rote memorization of libraries and frameworks with no understanding of the principles behind them, and will blindly obey the whims of management. BTW - I have nothing against technician-type people.
    Again I agree - I've been a manager too, and to be honest I think most managers resent the fact that top quality freelance software engineers earn significantly more than they do. They would much rather have "kids" fresh out of college working for them, and available to take the blame for their mistakes.

    I hear what you say about Python. I've played with it a bit, but was put off by the surface syntax and the need to pass a reference to self to every method. Having said this, I have heard other people say good things about Python too, and it does seem to have that "get into the guts" meta-feeling that you get with other dynamic languages. It would be interesting to know more about "meta-programming" and Python. Any pointers?

    Regards,

    Paul.
  152. Python Pointer[ Go to top ]

    Most of the material I used to learn Python came from http://www.python.org.

    Here's a couple links on python metaprogramming:

    http://www.python.org/pycon/2005/papers/36/pyc05_bla_dp.pdf

    http://www.onlamp.com/pub/a/python/2003/04/17/metaclasses.html

    http://www.python.org/pycon/dc2004/papers/24/metaclasses-pycon.pdf

    My favorite feature isn't metaclasses, though, it's descriptors. The standard "property" descriptor isn't that great, because it just provides a wrapper to make get/set methods look like normal fields. But the possibilities for custom descriptors, especially when coupled with metaclasses that can manipulate them at class instantiation time, seem endless.

    One of the more interesting things I've done with metaclasses and descriptors is "compile time" (meaning before the main part of the program starts running) code validation.
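    To give a flavour of what this looks like, here is a minimal sketch (all names invented for illustration, not from any real framework) of a descriptor whose declarations a metaclass validates when the class statement executes:

    class Typed(object):
        """Descriptor enforcing a type on a single instance attribute."""
        def __init__(self, name, kind):
            self.name = '_' + name
            self.kind = kind

        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            return getattr(obj, self.name)

        def __set__(self, obj, value):
            if not isinstance(value, self.kind):
                raise TypeError("%s must be a %s" % (self.name, self.kind.__name__))
            setattr(obj, self.name, value)

    class Checked(type):
        """Metaclass that vets every Typed declaration at class-creation
        time -- i.e. before the main part of the program runs."""
        def __init__(cls, name, bases, namespace):
            for attr, value in namespace.items():
                if isinstance(value, Typed) and not isinstance(value.kind, type):
                    raise TypeError("bad Typed declaration %r in %s" % (attr, name))
            super(Checked, cls).__init__(name, bases, namespace)

    class Account(object):
        __metaclass__ = Checked  # Python 2 spelling of the period; Python 3 uses class Account(metaclass=Checked)
        balance = Typed('balance', int)

    acct = Account()
    acct.balance = 10      # fine
    acct.balance = "ten"   # raises TypeError via the descriptor

    A mis-declared descriptor (say, Typed('balance', 42)) fails as soon as the module is imported, which is the "compile time" flavour of validation described above.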

    It took me about 6-8 months to warm up to Python. At first I hated the indentation rules and having to pass "self" (or whatever you decide to call it) as a parameter, among other things. I've come to appreciate the indentation rules, and the "self" thing is really just a quirk. It took metaclasses and descriptors for me to overcome many of my initial impressions. I was also bothered by the interactive interpreter approach to teaching it. I wanted a tutorial that said create a file called hello.py, type such-and-such in it, save it, and then type python hello.py to execute it. But no, many tutorials initially focus on interactive use. Really, that's a good thing, because now I think the interactive interpreter is a key feature. But it was a pain at the time.
  153. Python Pointer[ Go to top ]

    Hi Erik,

    Thanks. I've had a brief look. Some of the ideas look similar to things I've seen in Ruby. As I understand it both Python and Ruby are fully interpreted languages, so no compilation stage to byte code. This seems to make it easier for the program to self modify, as an interpreter can be more readily used to generate code on the fly (is this true?).

    Anyway, I'm not sure you can do this level of meta-programming in Smalltalk, but I could be wrong; perhaps Steve knows...

    At a first glance at the slides, the meta capability seems to be more extensive than Ruby's. I could be wrong about this too, but I haven't read anything on Ruby that takes meta-programming to this level.

    At the moment I program mainly in Java and use Ruby for automated testing and scripting. I love the cleanness and simplicity of Smalltalk and use it whenever I can, just for fun.

    On my todo list of new languages is LISP. I've got a theory that LISP could hold the key to "the theory of everything" as far as computer languages are concerned. I've blogged on the subject:

    http://pab-data.blogspot.com/2005/11/lisp-and-unified-theory-of-everything.html

    It looks like I need to add Python to my todo list also. As far as creating DSL's and meta-programming are concerned, Python does seem to be a bit special. I need to look into it further.

    Thanks for the pointers.

    Cheers,

    Paul.
  154. Python Pointer[ Go to top ]

    As I understand it both Python and Ruby are fully interpreted languages, so no compilation stage to byte code.

    I know Python is translated into bytecode, and I think Ruby is, too. That's why there's talk of eventually making one dynamic language VM for Python, Ruby, and Perl.
    This seems to make it easier for the program to self modify, as an interpreter can be more readily used to generate code on the fly (is this true?).

    None of the metaprogramming I've done in Python has involved code generation, although in other languages code generation (or its cousin, bytecode instrumentation) would be used to achieve similar effects. In Python it's all indirection.
    At a first glance at the slides, the meta capability seems to be more extensive than Ruby's. I could be wrong about this too, but I haven't read anything on Ruby that takes meta-programming to this level.

    I think the Ruby folks would deny this. I've seen numerous posts claiming Ruby has more extensive metaprogramming capabilities than Python...but frequently they mischaracterize Python if they include any details at all.
  155. Python Pointer[ Go to top ]

    As I understand it both Python and Ruby are fully interpreted languages, so no compilation stage to byte code.
    I know Python is translated into bytecode, and I think Ruby is, too. That's why there's talk of eventually making one dynamic language VM for Python, Ruby, and Perl.
    OK, but the byte code is not stored anywhere, right? Unlike the .class files in Java. Smalltalk statically stores byte code too, in its Image. When I run Ruby code through the debugger I notice that the code appears to be 'interpreted' twice. First it looks as though it parses the source code, interpreting and executing all those 'def' statements, then it seems to execute the resultant bytecode.

    I've always wanted to know what exactly is going on, but have never spent the time. If anyone can shed some light, I would appreciate it.

    Cheers,

    Paul.
  156. Python Bytecode[ Go to top ]

    Python gets compiled to *.pyc files. *.pyc files can be distributed independently of the corresponding source files.

    When a Python script is executed or a Python module imported, it is executed from top to bottom.

    For example, let's say your file has the following class declaration:

    class Foo(object):
        fooLog = open("foolog.txt", "a+") # open log ("+" alone is not a valid mode)
        def __init__(self, val):
            self.val = str(val)
            # class attributes must be reached via self (or Foo) inside methods
            self.fooLog.write("Constructed a Foo with val: %s" % self.val)

    When the class is parsed the interpreter will build a dictionary for Foo. The statement fooLog = ... will result in the open function being executed and the resulting file object being stored in the dictionary with the key fooLog. The def __init__... statement causes a function object to be created and then put into the dictionary.

    When this is done, the constructor for the class type (which is the base class for all metaclasses) is invoked with an uninitialized instance of a type object, the name of the class, a tuple containing the instance of the type object for "object" (and any other base classes), and the dictionary. The constructor for type does various operations, and if you extend type to define your own metaclass, and specify that Foo is of that metaclass, then your code is executed instead of the constructor for type (typically the first or last thing it will do is invoke the constructor for type).

    This means you have to be careful of the order in which you define classes. For example, the following code is invalid:

    class Foo(object):
        bar = Bar()

    class Bar(object): pass

    When the definition of Foo is being processed, there's no such thing as a Bar, and consequently a NameError is thrown. It's a rather annoying limitation.
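    To make the sequence concrete, here is a tiny sketch (invented names, Python 2 metaclass spelling of the period) showing the class body running top to bottom and the metaclass receiving the finished dictionary:

    def noisy(value):
        print("class body statement ran: %r" % value)
        return value

    class Meta(type):
        def __init__(cls, name, bases, namespace):
            # Invoked only after the body has finished building the dictionary.
            print("metaclass sees %s with keys %s" % (name, sorted(namespace)))
            super(Meta, cls).__init__(name, bases, namespace)

    class Foo(object):
        __metaclass__ = Meta   # Python 3 spells this: class Foo(metaclass=Meta)
        x = noisy(1)           # executes immediately, before Foo exists
        y = noisy(2)           # executes second; only then is Meta invoked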
  157. Python Bytecode[ Go to top ]

    Thanks Erik,

    Sounds similar (but different) to Ruby. Ruby seems to execute from top to bottom too. It doesn't compile to byte code files though.

    I've only been using Ruby for a few months. I'll find out how it does its stuff and post it here.

    BTW from your metaclass description, it doesn't sound that different to Smalltalk, so the metaprogramming in Python should be possible in Smalltalk too. Still waiting for Steve to chime in...

    I've only done rudimentary metaprogramming in Smalltalk (overriding new as a factory method, etc.), so it would be nice to know that I could use the metaprogramming techniques described in your Python slides in Smalltalk as well. I will investigate...

    Watch this space.

    Paul.
  158. Hi Erik,

    I've done a little investigation into dynamic languages and metaprogramming. As you know, the metaprogramming in these languages comes from treating classes as first-class objects. This is achieved by adding in layers of indirection (dictionaries) at runtime - layers that are optimised out during compilation in static languages.

    The metaprogramming you speak of is a big reason why dynamic languages are so good when it comes to creating DSL's. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, frameworks etc.) by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
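    A small Python illustration of that indirection (names invented; treat it as a sketch):

    # type(name, bases, dict) is the same machinery the class statement uses,
    # so a class can be built, and later extended, entirely at runtime.
    Order = type("Order", (object,), {"quantity": 0})

    def total(self, unit_price):
        return self.quantity * unit_price

    Order.total = total    # extend the "language" after the fact

    o = Order()
    o.quantity = 3
    print(o.total(10))              # 30, dispatched through the class dictionary
    print(Order.__dict__["total"])  # the method sits in an ordinary dictionary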

    IMO this is why much of the innovation in software today, like continuation servers, is still coming from dynamic languages. Innovation in static languages like Java is restricted to what the language designers have chosen to allow.

    Amongst the dynamic languages there are differences though. It appears to me that languages like Ruby and Python are an attempt to bring "Smalltalk-like" programming to the mainstream. The main difference is that these languages are file based, so the class and object dictionaries I mentioned earlier are created at runtime from source files on disk.

    In Smalltalk these dictionaries are in memory all the time. You add to them each time you import new classes into your Image. This is analogous to installing a new application into Windows, and the whole Image can be persisted to disk when you shut down your machine.

    The Image thing has always been seen as a downside of Smalltalk, as it creates an impedance mismatch between the Smalltalk environment and your OS. But if the Image is your OS then using an Image becomes a significant advantage. If you look at it this way then the granularity of components available at runtime is reduced down to a single class - in fact, down to a single method implementation.

    Croquet, which I mentioned in previous posts, builds on this idea to create "applications" that have no boundaries and can interact with each other at the object level seamlessly. This brings a whole new meaning to "drag and drop". Language Work Benches also talk of an "abstract program representation" that can be exported (projected) in various ways.

    It seems to me that we are taking a long time fully embracing Object Oriented thinking. When we do, files will just become another object export/import format, and our operating systems will consist of object dictionaries, much like the Smalltalk Image.

    To me it is frustrating that these ideas have been around for such a long time, yet are taking so long to gain general acceptance. To me the advantages are obvious. If you take a look at the Why Smalltalk website, there is evidence of Smalltalk-based research languages that do all of the above:

    http://www.smalltalk.org/versions/SlateSmalltalk.html

    Other than the syntax, which I personally like, Smalltalk-80 should be "old news" by now. In 2006 we should have dynamic OO languages that are significantly more advanced. Yet to me it appears that we are still playing catch-up to a language which is over 30 years old. I had hoped that both Python and Ruby would add something new, but other than being file based (which in a sense is a retrograde step), conceptually they add little.

    I could be wrong, and I am happy to be corrected.

    Regards,

    Paul.
  159. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, Frameworks etc), by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
    What does bytecode manipulation have to do with DSLs?! DSLs need custom parsing, which has nothing to do with bytecode. DSLs presuppose a custom frontend to the compiler, nothing more.
  160. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, Frameworks etc), by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
    What does bytecode manipulation have to do with DSLs?! DSLs need custom parsing, which has nothing to do with bytecode. DSLs presuppose a custom frontend to the compiler, nothing more.

    Go read the links Paul posted to Martin Fowler's essays on language workbenches. Dynamic languages (and, to varying degrees, all OO languages) enable the creation of in-language DSLs.
  161. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, Frameworks etc), by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
    What does bytecode manipulation have to do with DSLs?! DSLs need custom parsing, which has nothing to do with bytecode. DSLs presuppose a custom frontend to the compiler, nothing more.
    Since Martin Fowler sort of made up the term DSL, I don't think either of us are qualified to provide a precise definition. In terms of bottom-up design - building up your language abstractions to be closer to your problem domain - it is often advantageous to introspect application code from within your DSL framework. A good example of this is Hibernate. To understand how to persist a given object it is useful to understand and intercept its method interface. In Java this is only possible using the dynamic proxy API (Java 1.3+ I think). For this the class must have an associated Java Interface. I do not know the details of Hibernate, but cglib is often used to generate bytecode on the fly, so reflection can be used to identify the methods and cglib used to generate an Interface before creating a dynamic proxy. I believe other frameworks do something similar. For example, I believe JDO uses cglib to modify your classes, adding additional byte code during the build process.

    In a dynamic language none of this is necessary. All objects (including classes) exist in a dictionary and can be looked up by reference at runtime. All a proxy need do is delegate "does not understand" method calls to another object. No statically hardwired pointers means no Interfaces or special "dynamic proxy API" needed, and no byte code manipulation.
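    In Python, for example (going by Erik's pointers - names invented for illustration), such a delegating proxy is only a few lines:

    class LoggingProxy(object):
        """Delegates any attribute it doesn't define itself -- the dynamic
        analogue of Smalltalk's doesNotUnderstand:."""
        def __init__(self, target):
            self._target = target

        def __getattr__(self, name):
            # Called only when normal lookup fails on the proxy itself.
            attr = getattr(self._target, name)
            if callable(attr):
                def logged(*args, **kwargs):
                    print("calling %s" % name)
                    return attr(*args, **kwargs)
                return logged
            return attr

    class Account(object):
        def __init__(self):
            self.balance = 0
        def deposit(self, amount):
            self.balance += amount

    acct = LoggingProxy(Account())
    acct.deposit(10)   # prints "calling deposit" -- no Interface, no special
                       # proxy API, and no byte code manipulation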

    Paul.
  162. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, Frameworks etc), by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
    What does bytecode manipulation have to do with DSLs?! DSLs need custom parsing, which has nothing to do with bytecode. DSLs presuppose a custom frontend to the compiler, nothing more.
    Since Martin Fowler sort of made up the term DSL, I don't think either of us are qualified to provide a precise definition.
    Speak for yourself. I do know what a DSL is. It also seems you don't understand the mechanics of supporting a DSL. It has nothing to do with bytecode manipulation. The Hibernate example of bytecode manipulation that you gave has nothing to do with supporting a DSL. How much have you really thought about this stuff?
  163. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, Frameworks etc), by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
    What does bytecode manipulation have to do with DSLs?! DSLs need custom parsing, which has nothing to do with bytecode. DSLs presuppose a custom frontend to the compiler, nothing more.
    Since Martin Fowler sort of made up the term DSL, I don't think either of us are qualified to provide a precise definition.
    Speak for yourself. I do know what a DSL is. It also seems you don't understand the mechanics of supporting a DSL. It has nothing to do with bytecode manipulation. The Hibernate example of bytecode manipulation that you gave has nothing to do with supporting a DSL. How much have you really thought about this stuff?

    Hi Brian,

    Ignorance is bliss... :^)

    Try reading the links I supplied as Erik suggested.

    Paul.
  164. This ability allows all programmers to play the role of "language designer" and create their own DSL's (APIs, Frameworks etc), by manipulating the base language. In languages like Java it is very difficult to achieve the same without resorting to external XML or bytecode manipulation (cglib).
    What does bytecode manipulation have to do with DSLs?! DSLs need custom parsing, which has nothing to do with bytecode. DSLs presuppose a custom frontend to the compiler, nothing more.
    Since Martin Fowler sort of made up the term DSL, I don't think either of us are qualified to provide a precise definition.
    Speak for yourself. I do know what a DSL is. It also seems you don't understand the mechanics of supporting a DSL. It has nothing to do with bytecode manipulation. The Hibernate example of bytecode manipulation that you gave has nothing to do with supporting a DSL. How much have you really thought about this stuff?
    Try reading the links I supplied as Erik suggested.
    No, this is evasion on your part. I'm not that easily sent into the weeds. You already said you aren't qualified to understand DSL theory. Why should I trust DSL links you post?

    You were talking about DSL and brought up Hibernate and bytecode manipulation. I complained of irrelevance. But you wouldn't drop it. So please explain the relevance.
  165. I've done a little investigation into dynamic languages and metaprogramming. As you know the metaprogramming in these languages comes from treating classes as first class objects. This is achieved by adding in layers of indirection (dictionaries) at runtime, layers that are optimised out during compilation in static languages.

    I have a hunch that 80-90% of the gain of treating classes as first-class objects at runtime could be had by allowing code to be executed that can manipulate the AST at compile time.

    Java is heading in this direction, but IMHO the way it's going to do it - with annotations and annotation processors - is too orthogonal to the process of writing classes.
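    As an illustration of the idea, here is a toy constant folder using the ast module found in later versions of Python (well after this thread's vintage); all names are invented:

    import ast

    source = "result = 2 + 3"

    class ConstantFolder(ast.NodeTransformer):
        """Rewrites 'literal + literal' into a single literal before the
        code is compiled -- manipulating the AST rather than the bytecode."""
        def visit_BinOp(self, node):
            self.generic_visit(node)
            if (isinstance(node.op, ast.Add)
                    and isinstance(node.left, ast.Constant)
                    and isinstance(node.right, ast.Constant)):
                return ast.copy_location(
                    ast.Constant(node.left.value + node.right.value), node)
            return node

    tree = ConstantFolder().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    namespace = {}
    exec(compile(tree, "<folded>", "exec"), namespace)
    print(namespace["result"])   # 5 -- the addition never reaches the bytecode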
  166. I have a hunch the 80-90% of the gain of treating classes as first-class objects at runtime could be had by allowing code to be executed that can manipulate the AST at compile time.
    I think I understand what you mean here. So the "hardwiring" could be "re-wired" during compilation, perhaps allowing you to add in cross-cutting concerns AOP-style, right? But this would still mean that changes could not be made later without re-compiling.

    I like the idea of "little applications" that can be wired together at runtime in new and interesting ways. Sort of like how you use pipes to link together awk, grep etc in Unix. This ability would allow developers to avoid the big "monolithic" application, and get better component re-use. Again, Croquet holds out this promise also.
    Java is heading in this direction, but IMHO the way it's going to do it - with annotations and annotation processors - is too orthogonal to the process of writing classes.
    I'm not aware of what exactly is going on with annotations at the moment. Annotations came out at about the time my interest in Java "innovations" began to seriously wane - about when I decided to dig out my copy of the Purple Book (the Smalltalk bible :^)). But seriously, I did notice that Martin Fowler made a positive comment on annotations. So again, any pointers would be appreciated.

    Cheers,

    Paul.
  167. I hear what you say about Python. I've played with it a bit, but was put off by the surface syntax and the need to pass a reference to self to every method.

    I played around with Python for a while last year. I found it great for all sorts of quick scripting of system tasks, but after a while I found things about it that really put me off, such as the limits of variable scoping. That, and participating in this thread, led me to Ruby.
  168. Automation vs the romantic.[ Go to top ]

    Your assumption here is that it requires less effort to update the "model" than it does to update the "code." I contend that there's absolutely nothing that makes this true.
    Dude, think about it logically, not emotionally, not romantically.

    Since design and coding are automatic and defect free, entire categories of bugs are eliminated. That's how limiting human input to analysis ensures that only analysis bugs occur. So the bugs that plague handcode are a superset of the bugs that plague a generated codebase. Hence model maintenance is less laborious than handcode maintenance. The competitive advantage that this confers imposes yet another economic imperative for more automation in the development pipeline.
    In UML you constantly bang into the limitations of the modelling tool.
    Since the invention of Java there's been amazing growth in the sophistication and productivity of Java IDEs. Why do you assume this trend can't apply to MDA studios? What drives the evolution of Java IDEs is mindshare.

    Dynamic languages had their chance and didn't win the tooling arms race, didn't win the mindshare. But that doesn't mean the mindshare can't be poached by grander ideas than Ruby that make better defined promises than Ruby, such as the elimination of entire categories of bugs.
    The reason people think code is "expensive" to change is that too many programmers are either too rushed or simply lack the skill to write good code.
    You're unwittingly making the case for more automation.
    Manufacturing laborers have given way to manufacturing engineers. Programmers should give way to software engineers. Programming is a skill, software engineering is a discipline.
    This seems self-congratulatory on your part, which explains why you consider the engineer's role to be sacred and above consideration for further automation. If you sincerely want to elevate creativity, then you'd appreciate the productive potential of giving a generative architecture to an analyst.
  169. Up-skilling and Productivity[ Go to top ]

    I remember when I first started programming back in 1990. Back then, when the "production line" waterfall metaphor was at its height, teams of 25-30 developers were commonplace. At that time we produced plenty of models: a requirements spec, analysis model, design model, deployment model etc. An artifact was created and maintained at each stage of "production". The team was split into groups with differing skills. An "Analyst" couldn't do a "coder's" job and "coders" knew very little about "Analysis".

    Over the years things have changed. Some of the most productive teams have used Smalltalk. On such projects 2-3 developers are commonplace. Each developer performs all of the old roles, switching between them seamlessly as needed. It is quite common for these teams of 2-3 to produce the same amount of software (value) as teams of 25-30 people.

    I can see two reasons for this:

    1. The team members are extremely highly skilled, with a deep understanding of all aspects of software development.

    2. They are using tools of their own choosing, that allow them to leverage their skills and become highly efficient and productive.

    We have all noticed a step in productivity between C++ and Java. Bruce Tate has written about a similar step in productivity between Java and Ruby. I believe an even bigger step is achievable with Smalltalk. I see the goal as us all achieving the best "up-skilling" available, and building upon that by evolving languages and tools that make us efficient and productive.

    This is not and will never be the same as automation.

    Paul.
  170. MDA vs MDD vs DDD[ Go to top ]

    ...my notion of a model is very concrete: THE MODEL IS THE CODE.
    If what you say were correct, then two differing codebases could never be manifestations of the same model.
    Why should code be any less able to support multiple implementations than a graphical model?

    A computer language that was extremely successful at this was 'C'. It allowed Unix to be ported to several different hardware platforms. The compiler would translate the code at compile time to differing hardware platform representations (machine code). The back-end of the gcc compiler can still be used to do "cross-compilation" today.

    Lisp, Smalltalk, Java etc do the same, the difference being that the translation to a "platform specific model" occurs at runtime dynamically.

    That's why all this talk about the Platform Independent Model (PIM) in MDA is all a bit silly. We have had platform independence for decades. Again, Martin Fowler makes the point:

    http://martinfowler.com/bliki/PlatformIndependentMalapropism.html

    You need to expand your idea of a "model". I like the idea of an "abstract program representation", kept in memory, that is independent of both the programmer's representation (text or graphics) and the platform representation (byte code, machine code etc). Smalltalk supports this idea in part with its concept of an "image". Language Work Benches are looking to take this idea much further.

    Regards,

    Paul.
  171. In order to bridge this gap, MDA would be better served if you talked about what MDA can practically achieve today...
    I used Shlaer/Mellor, which is a primordial form of model compilation akin in scope to MDA. The target application was residential fiber telecom, including television and telephony. The target platform was C++ firmware on an RTOS. The undeniable proof that model compilation succeeded was that many of the developers were never exposed to C++ or RTOS.

    These developers (including myself) specified all of the structure and some of the behavior with graphical diagrams, which were then automatically translated into C++. Much of the behavior wasn't captured diagrammatically, but it was coded in a very high level action language that was biased toward model navigation, manipulation, and constraints, and automatically translated into C++. The tooling allowed developers to make complicated and fully specified structural and behavioral contributions. Many of the developers knew only the problem space and were kept entirely away from the solution space (C++/RTOS). This startup was later bought by AT&T.
  172. The undeniable proof that model compilation succeeded was that many of the developers were never exposed to C++ or RTOS.
    Ivar Jacobson, one of UML's inventors, had heard of the unbelievable magic of model compilation and came to our office for a demo. At that time the UML inventors were elaborationists who denied the practicality of model compilation and used its experimental nature as a chance to bash it -- ie, FUD. A few years after our demo, the UML inventors were evangelizing MDA, their own brand of model compilation.
  173. Hi Brian,

    Pretty impressive. It sounds as though you used a Domain Specific Language. Martin Fowler has an article on the subject you may have seen:

    http://www.martinfowler.com/bliki/DomainSpecificLanguage.html

    It sounds as though what you did wasn't MDA in the standardised OMG sense. It sounds more akin to what you could achieve in Unix using parsers like yacc to do code generation. Martin Fowler talks about these; he calls them external DSL's.

    In the same article he also classifies what I've been talking about with Lisp and Smalltalk as an "internal DSL".


    As for standard MDA (OMG MDA), he is pretty scathing. He identifies three MDA camps, of which the only one he appears to have respect for is T-MDA, which is the one closest to what you describe:

    http://martinfowler.com/articles/mdaLanguageWorkbench.html

    He seems to favour something that he calls a "Language Work Bench":

    http://martinfowler.com/articles/languageWorkbench.html#ModelDrivenArchitecturemda

    Quite interesting. All the "external DSL" examples he describes are domain-specific "code generators" of one form or another. These are OK IMO if you are solving the same type of problem again and again (a bit like Ruby on Rails), but for pure bespoke work the initial investment of writing the generator may not provide a return.

    As a general approach I'm still in the "internal DSL" camp. Especially for complex "people centric" domains. I note that your domain was "technology" (telecoms) centric.

    Thanks for the debate. Having to explain why you do something really does make you look at your assumptions. When I get time I will look more into Language Work Benches.

    But until I see working examples, I'll be sticking with "Internal DSL and Smalltalk", or "Internal DSL and Ruby" when I get the chance, and "hybrid Internal/External DSL and Java/XML" when I don't :^)

    Cheers,

    Paul.
    Reading through the DSL articles I listed in my last post has tied together a number of ideas for me. First of all, the goal of moving your programming language closer to your problem domain.

    As Martin Fowler describes, there have been several approaches to this including the "little languages" approach common to most Unix people (awk, sed etc). In stuff I've read by Alan Kay, he says that this was also his intention when he "invented" OO programming with Smalltalk. So when you define a new class, you are in effect extending the language and bringing it closer to your problem domain.

    To do this your language needs to be built for extensibility, allowing your programmers as many degrees of freedom as possible. This builds on the Lisp tradition, which Martin Fowler points out in an article he references:

    http://www.paulgraham.com/progbot.html

    When you are building up DSL's in this way you are programming bottom up, creating a domain specific language better suited to your problem domain. In the same writings by Alan Kay, he went on to talk about the difference between programming in the DSL and programming the DSL.

    Programming in the DSL is domain specific. A DSL should be easier to use than a general purpose programming language. So easy in fact that Alan Kay hoped that end users and children could program in DSL's.

    Programming the DSL is a whole different ball game. It requires knowledge of the full underlying language and an ability to design a domain specific language suited to the problem domain.

    This approach is what Martin Fowler classifies as an internal DSL. One of the downsides of internal DSL's that he identifies in his article is the fact that lay programmers are burdened with the full power of the underlying language. So the degrees of freedom that were a bonus when programming the DSL become a burden when programming in it.
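    To make the distinction concrete, here is a toy internal DSL in Python (names invented; the same shape works in Ruby or Smalltalk). Writing the Rule class is programming the DSL; the fluent declaration at the bottom is programming in it:

    class Rule(object):
        """Programming *the* DSL: this plumbing needs full knowledge of
        the underlying language."""
        def __init__(self, description):
            self.description = description
            self.checks = []

        def where(self, predicate):
            self.checks.append(predicate)
            return self    # returning self is what enables chaining

        def applies_to(self, item):
            return all(check(item) for check in self.checks)

    # Programming *in* the DSL: reads close to the problem domain.
    bulk_discount = (Rule("bulk discount")
                     .where(lambda order: order["quantity"] >= 10)
                     .where(lambda order: order["customer"] == "trade"))

    print(bulk_discount.applies_to({"quantity": 12, "customer": "trade"}))  # True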

    Alan Kay makes the same point and saw this as a weakness in Smalltalk. In an article he says that he wished there was a light that came on to let programmers know when they were moving from programming in the DSL into programming the DSL. In Smalltalk this transition is silent.

    Language work benches - and the idea of editors where views of an abstract representation of the program are projected to the programmer - provide a possible answer to this problem. Lay programmers can get a projection of the DSL's without the ability to modify the DSL's themselves. More advanced programmers would get the full power.

    The long term goal of language work benches is to get domain experts to program themselves, but just making programming simpler for most programmers is a significant achievement in itself.

    This is why I feel that the "C" mindset has been holding us back. Alan Kay was considering these ideas over 30 years ago, and they are still being researched today in Language Work Benches. In fact some of the features of language work benches were available in Smalltalk 30 years ago, albeit in limited form. Imagine if we had spent the last 30 years building on these ideas - where would we be today?

    So the real issue for me is how we got ourselves into this situation. What is the lesson to be learnt from history? The best ideas are out there; we just seem to have an uncanny knack for avoiding them.

    Paul.
  175. The long term goal of language work benches is to get domain experts to program themselves...

    Why? I actually think the situations where users should be programming are fairly limited. Writing reliable software requires a certain logical mindset.

    Most people, at least in my experience, simplify problems by ignoring edge conditions (among other things, and I'm using "edge condition" in a very loose sense to mean "uncommon condition"). This is generally ok when a person is thinking about how a person is going to solve a problem, because the person is usually smart enough to know when an edge condition is encountered and come up with a reasonable way of handling it.

    Computers aren't so lucky. They need to be explicitly told how to detect edge conditions, and how to handle them when they are encountered.

    Consequently, the vast majority of users, even technically inclined ones, simply aren't capable of writing software suitable for multi-user environments where reliability is in the least bit important. Now, the same could be said for a significant portion of programmers, because many people become programmers without ever really developing the analytical skills necessary to write quality software.

    I think the goal is to make software engineering less tedious, not less difficult.
  176. I think the goal is to make software engineering less tedious, not less difficult.
    Yes, I agree. The boundary between the program and the DSL is an artificial one. The only way to determine where that boundary should be is to iteratively refactor your program/DSL as you build it. It may take some time, or even several applications, before the interface between the DSL and the rest of the program becomes stable.

    So building a DSL is not an easy thing IMO. And to build it you need to use it. I can see situations where it helps though. If the DSL designer and the DSL user work closely together in the same team, then the less skilled programmer can become more productive on the back of the more highly skilled one.

    I see this happening in our team. The experienced developers build most of the system plumbing (DSLs) whilst the less experienced tend to stick to application code.

    Paul.
  177. As Martin Fowler describes, there have been several approaches to this including the "little languages" approach common to most Unix people (awk, sed etc).
    When it was announced on TSS last year, I read the papers about the "little languages" research collaboration between IntelliJ and Fowler. What I found most striking was that their explicit condemnation of visual languages was accompanied by promising research into the arbitrary nesting of visual blocks of text. It was clear to me that this research collaboration was exploring the possibility, and laying the foundation, of visual language. Yet they adamantly denied it.
  178. It sounds as though what you did wasn't MDA in the standardised OMG sense. It sounds more akin to what you could achieve in Unix using parsers like yacc to do code generation. Martin Fowler talks about these; he calls them external DSLs.
    The BridgePoint tool then and now is an immersive IDE. State diagrams can be single-step animated and the action language interpreted -- all before code generation begins. This is what's known as an executable model, the highest breed of model compilation, which OMG's MDA Technical Perspective classifies as the fourth and most sophisticated level of model translation techniques.
  179. It sounds as though what you did wasn't MDA in the standardised OMG sense. It sounds more akin to what you could achieve in Unix using parsers like yacc to do code generation. Martin Fowler talks about these; he calls them external DSLs.
    The BridgePoint tool then and now is an immersive IDE. State diagrams can be single-step animated and the action language interpreted -- all before code generation begins. This is what's known as an executable model, the highest breed of model compilation, which OMG's MDA Technical Perspective classifies as the fourth and most sophisticated level of model translation techniques.
    From how you described it, the difference between what you did and MDA is that your domain-specific language was evolved in unison with the end program. I am quite happy with graphical DSLs. The problem I see with MDA is the attempt to use UML 2.x as a general-purpose programming language.

    This is very different from what you said you actually did. An off-the-shelf DSL/generator/PSM etc. is a very different proposition from an in-house tool (external DSL) developed in close communication with the people who will be using it. Please refer to my previous posts; I make this point about MDA in several of them.

    Regards,

    Paul.
  180. Hi Brian,

    Took a look at the tool you mention:

    http://www.acceleratedtechnology.com/embedded/nuc_bridgepoint.html

    I'm surprised that you've been so shy about mentioning it before.

    In the early nineties I spent 4-5 years writing embedded software for GSM mobile phones. Anyone who knows anything about GSM will know that you cannot get more real-time than that. If BridgePoint had been available then, I would have gone nowhere near it.

    This is my point. The term "domain specific" means just that. In the case of this tool, the domain for which it was designed was the environment (people, organisational culture, performance and power requirements, application requirements etc.) of the original RTOS/C++ in-house applications you mention. The "sweet spot" of this tool is consequently "narrow".

    For real-time GSM there are two overriding constraints. Real-time must mean "real-time" (down to microseconds): if you miss your paging slot then the phone will drop the network. The other constraint is battery life, i.e. power consumption, so the code must do as little as possible and hibernate the CPU whenever it can.

    As a commercial product, BridgePoint has to attract a sufficiently "wide" audience to gain commercial success. So, as I've mentioned before and have experienced several times, these types of products are oversold to projects whose requirements do not match the tool's "sweet spot".

    The way they are sold is always the same. They are sold to Managers (often before developers are hired) on the basis that they will "dumb down" development and save money.

    The developers then find themselves stuck with a poor technology choice that doesn't quite fit the problem. They spend 10% of their time playing with the lovely GUI, and the remaining 90% of their time messing about with "marks" and "translation rules" and "action language" that looks remarkably like code to me.

    I don't see how "domain experts" could possibly be expected to program in "action language" never mind deal with "translation rules" and "marks".

    In the end you hit a hard requirement that you just cannot achieve with your tool (despite reading the voluminous manual, going on the training course, numerous support calls, and finally buying in the specialist consultant from the same company). So you're back to general-purpose programming and tweaking the generated code, with all that this entails... or modifying the tool yourself to generate what you need (assuming that the vendor will let you have the source code, of course). Easier then?

    So what we have here is a great tool that served a good purpose in a limited environment. This is very different from MDA, and the vision that you have been touting.

    Regards,

    Paul.
  181. Approaches like MDA focus too much I think on the process of code production rather than on the quality of the resultant code itself (IMHO).
    If by "quality" you mean runtime optimality, then please note that Moore's Law and Amdahl's Law devalue runtime, whereas developers seem always to get more costly. Ie, trends favor MDA. Or maybe by "quality" you mean absence of deliverable defects, something code generation achieves with superhuman precision. One of the reasons we use javac to generate bytecode is that hand bytecoding is so error-prone. In this sense quality and the elimination of human work tend to be linked.

    Hi Brian,

    You've missed the point. By quality I mean "fitness for purpose". You can't tell whether a bunch of boxes and lines on a screen (or piece of paper) will do what you think it will.

    The proof of the pudding is in the execution, and you can't execute UML. It's that simple (think about it).
  182. The proof of the pudding is in the execution and you can't execute UML. It's that simple (think about it).

    Just had an idea. What you need to tell your MDA (CASE) tool vendor the next time he comes round is that you want the tool to execute your model.

    You want to be able to set up pre-conditions, execute a use case and verify post-conditions on the model, all within the tool. If he's honest he'll say "I can give you one of those for free. Here it is: an executable model generator. Most people call it an IDE." The code IS the model :^)
  183. You can't tell whether a bunch of boxes and lines on a screen (or piece of paper) will do what you think it will.The proof of the pudding is in the execution and you can't execute UML.
    You can't execute Java source code either (ignoring BeanShell). Batch translation is integral to traditional Java development, so it's no different from MDA. Also, you impugn the readability of UML, and this is totally unfounded. UML is much more readable than Java. I've used the BridgePoint model compiler that offered simulation, which let me watch an animation of the state diagrams prior to ever generating code. This let me catch sequence errors and race conditions without ever reading the C++ that it would eventually generate.
  184. You can't tell whether a bunch of boxes and lines on a screen (or piece of paper) will do what you think it will.The proof of the pudding is in the execution and you can't execute UML.
    You can't execute Java source code either (ignoring BeanShell). Batch translation is integral to traditional Java development, so it's no different from MDA. Also, you impugn the readability of UML, and this is totally unfounded. UML is much more readable than Java. I've used the BridgePoint model compiler that offered simulation, which let me watch an animation of the state diagrams prior to ever generating code. This let me catch sequence errors and race conditions without ever reading the C++ that it would eventually generate.

    I've looked on the web, and what you say seems to be correct. "Round-trip engineering" has moved on quite a lot. One of the things driving the move to more dynamic languages is the need for agility. For agile development, design is incremental, discovered through incremental, iterative changes and concrete feedback (evolution, no Big Upfront Design). For this to be practicable the build-test cycle needs to be short. For many languages (including Java with an incremental compiler), build-test can take seconds.

    How long does this take using MDA? Is programming in your CASE tool more efficient than using an IDE? I still fail to see the advantage.
  185. <quote>
    How long does this take using MDA? Is programming in your CASE tool more efficient than using an IDE? I still fail to see the advantage.
    </quote>

    It depends on your PC power :-) It can also be done in seconds if you have a fast PC. In the Open Source area for E-MDA (I'm specialised in Open Source MDA) you still need two different environments: a UML modeler and an IDE. The most important part: always remember that your *model* is your *source code*!

    The following article series shows what this development process can look like (it uses AndroMDA for model-to-text/model-to-code transformations):
    http://www.jaxmagazine.com/itr/news/psecom,id,21908,nodeid,146.html

    Cheers,
    Lofi.
  186. <quote>
    This points to another difference. The code IS my design. UML is not my design, unless of course I use my CASE tool as a programming environment, in which case "UML+actions" is my code and my design (but who would want to do this? I've done it and it's terrible). So the only advantage to having lots of UML lying around is documentation.
    </quote>

    Actually you have to say: the *model* is always my design, since your Java code is also a model! It just uses *text* as its representation. And yes, you can have an XML representation of Java code; see JavaML - http://www.badros.com/greg/JavaML

    And the idea of o:XML, which Brian pointed out, has also shown this capability. This is one point that makes e.g. UML, OCL, AS, MDA and QVT very interesting: their specifications separate the abstract syntax of the language from its concrete syntax. See the article from Mr. Fowler below for some examples of abstract ("Java model") and concrete syntax ("Java text, JavaML") and why they are important.

    If you can see this as an important value, then you will see that UML, MDA, AS, OCL and QVT go in the right direction to help us raise the abstraction level.

    Anyway, as I said in 2003: in general MDA works top-down and Java bottom-up, and they will meet in the middle :-) Please see this *very old* thread, which still has its own value:
    http://www.theserverside.com/news/thread.tss?thread_id=20314#88933

    <quote>
    BTW can you explain what a DSL is? Every time I create a new class and add it to my code base, am I not creating a Domain Specific Language? My new abstraction is specific to my application domain. Is DSL another way of saying: "use a language that has the right libraries"?
    </quote>

    Yes, you are correct. At a certain level it is a DSL. This is a very good article about DSLs (incl. MDA, etc.) from Mr. Fowler:
    http://www.martinfowler.com/articles/languageWorkbench.html

    Cheers,
    Lofi.
  187. Recap: Unified Modeling LANGUAGE.[ Go to top ]

    Back in October in this thread, Bruce said:
    Abstractions will inherently rise up over time. The point in Beyond Java is that this is just the sort of catalyst that could cause a new language to emerge.
    His claim evoked 156 responses and has for the past two weeks monopolized this possibly record-breaking thread.

    By being a more abstract language, UML well suits the astoundingly popular concern Bruce voiced.
  188. Recap: Unified Modeling LANGUAGE.[ Go to top ]

    Back in October in this thread, Bruce said:
    Abstractions will inherently rise up over time. The point in Beyond Java is that this is just the sort of catalyst that could cause a new language to emerge.
    His claim evoked 156 responses and has for the past two weeks monopolized this possibly record-breaking thread. By being a more abstract language, UML well suits the astoundingly popular concern Bruce voiced.

    UML isn't any more abstract than a dynamic language. Heck, it isn't any more abstract than Java or C++.

    The question isn't "How abstract is the language?" The question is "How easy is it to build abstractions using the language?"
  189. Recap: Unified Modeling LANGUAGE.[ Go to top ]

    UML isn't any more abstract than a dynamic language. Heck, it isn't any more abstract than Java or C++.
    Take memory semantics as an example dimension of abstraction. C++ explicitly differentiates between stack allocation and heap allocation, and a hacker assumes the cognitive load of tracking this for every piece of memory he allocates. Ie, C++ ain't abstract.

    Java is a little better, but the hacker can still be cognitively burdened with finalizers or the java.lang.ref package. So the abstraction is still a wee bit leaky.
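
    A small illustration of that burden, using nothing beyond java.lang.ref (whether the object survives is up to the collector, which is exactly the cognitive load in question):

        import java.lang.ref.WeakReference;

        // The abstraction leaks: the hacker must reason about when the collector runs.
        public class Leaky {
            public static void main(String[] args) {
                WeakReference ref = new WeakReference(new Object());
                System.gc();                     // only a hint; collection is not guaranteed
                System.out.println(ref.get());   // may print the object, may print null
            }
        }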

    Without modification an abstract formal model can be projected into C++ or Java. Maybe the developer neither knows nor cares whether his target architecture uses Java or C++. So the developer has been completely shielded from the target's memory semantics. Ie, UML abstracts away Java and C++ and is thus more abstract than either.

    You mentioned dynamic languages. Take JavaScript as an example. Maybe a web client can alternatively be coded as either AJAX or applet. If an abstract formal model can without modification be projected into either AJAX or applet, then the whole matter of dynamic vs static target language has been abstracted away. The developer is shielded from complexities such as class-based vs prototype-based. Ie, UML is more abstract than a dynamic language.
    The question isn't "How abstract is the language?" The question is "How easy is it to build abstractions using the language?"
    No. In the postmodern programming era, the question is how easy it is to reuse canned abstractions from off the shelf. Model projection's target-architecture pluggability shines at this. Canned abstract formal domain models can be grabbed off the shelf, mixed into an application, and the application can then be arbitrarily retargeted to various canned architectures. Or the developer can simply go with a single target architecture and craft custom domain models to suit his business. The flexibility is compelling.
  190. Recap: Unified Modeling LANGUAGE.[ Go to top ]

    I said:
    The question isn't "How abstract is the language?" The question is "How easy is it to build abstractions using the language?"
    You said:
    No. In the postmodern programming era, the question is how easy it is to reuse canned abstractions from off the shelf. Model projection's target-architecture pluggability shines at this.

    I think we have a fundamental disagreement here. In my experience, the primary challenge in building and enhancing a system is optimizing the model (using Paul's definition of model, not yours) and business logic. Writing code for the lower levels certainly takes up time, and it would be nice to reduce it, but it's fairly bounded and generally not a source of significant project risk.

    Your reasoning would say that eliminating the effort needed to generate the plumbing allows more effort to be expended on analysis, and reduced cycle time allows more iterations centered around full working software.

    If you're talking about building the typical CRUD application, you're absolutely correct - although I think there are better ways to achieve similar gains than MDA.

    However, if your application does any sort of analytics or data mining or non-trivial workflow, it hardly does any good. Why? Because the software engineer needs to be able to build abstractions in order to overcome apparent impedance mismatches not only vertically between the developer and the machine, but vertically between the "simple" functionality and "advanced" functionality, and horizontally among the functionalities focused on various stakeholders.

    Increasing abstraction accelerates software development. Accelerating the increase of abstraction accelerates the acceleration of software development. Therefore, a language that makes building abstractions easier will asymptotically improve the cost of developing a piece of software relative to its complexity, while pre-canned abstraction will at best reduce effort by a constant multiplier, and frequently only by a constant, because the pre-packaged constructs only benefit a small chunk of the system.
    Canned abstract formal domain models can be grabbed off the shelf, mixed into an application, and the application can then be arbitrarily retargeted to various canned architectures

    Won't work. I've seen plenty of "canned" domain models. They're always somewhere between worthless and harmful. I don't know why ISVs think companies have no desire whatsoever to achieve a competitive advantage. Well, on second thought, they do see it; they just like to lie and say you can buy long-term competitiveness. You can't, because anything you can buy, so can your competitors.

    Ok, I'm ranting now. Time to get back to work. I have ISVs that need their feet held to the fire...
  191. Recap: Unified Modeling LANGUAGE.[ Go to top ]

    Your reasoning would say that eliminating the effort needed to generate the plumbing allows more effort to be expended on analysis, and reduced cycle time allows more iterations centered around full working software. ... However, if your application does any sort of analytics or data mining or non-trivial workflow, it hardly does any good. Why? Because the software engineer needs to be able to build abstractions in order to overcome apparent impedance mismatches not only vertically between the developer and the machine, but vertically between the "simple" functionality and "advanced" functionality, and horizontally among the functionalities focused on various stakeholders.
    This seems to be the crux of your apologetics for handcode. Your last sentence is 45 words long, including 3 verbs (needs, build, overcome)! I assume that sentence is your thesis statement. If you want me to respond to it, then I need to understand it. Could you rephrase?
  192. Recap: Unified Modeling LANGUAGE.[ Go to top ]

    Your last sentence is 45 words long, including 3 verbs (needs, build, overcome)! I assume that sentence is your thesis statement. If you want me to respond to it, then I need to understand it. Could you rephrase?

    I'm basically saying that the abstractness of something is relative to the concern you are addressing. This applies to both abstracting the machine away from the programmer and allowing the programmer to create abstractions that fit the needs of various stakeholders.

    Let's say you're developing a procurement system that will have three primary organizational stakeholders: procurement, engineering, and manufacturing.

    All three are concerned with the processing of purchase orders, but have very different views of them. Procurement obviously needs all the dirty details of the order itself, but probably isn't that concerned with where the stuff it's buying is going to be used. Engineering mostly cares about purchase orders from a historical perspective: how much does a given part cost, what is its lead time? (so that it can design widgets that can be built cheaply and quickly) Manufacturing is primarily concerned with how delivery of the ordered parts is going to fit in with the shop floor schedule (since obviously you need the parts before you can build the product).

    In the scenario above, the different stakeholders need different abstractions. The abstraction for the engineering organization is probably just layered on top of the model for the procurement organization, and is very simple in its interface but possibly complex in its implementation, because it's converting a pile of data into something simple. The one for manufacturing has to be built on top of both the procurement model and the model for the shop floor control system.
    This seems to be the crux of your apologetics for handcode.

    Why would I apologize for handcode? Hopefully I'll get the time this weekend to work out a demonstration of how metaprogramming can make Python programming more abstract, efficient (in terms of developer time), and flexible than UML. Maybe that will convince you... but probably not.

    But I think others would find it interesting.
  193. I'm basically saying that the abstractness of something is relative to the concern you are addressing. This applies to both abstracting the machine away from the programmer and allowing the programmer to create abstractions that fit the needs of various stakeholders.
    +1
    Again, Erik, you are spot on. In the past I used a modelling approach where the second concern (stakeholders) was covered by the essential "analysis" model and the subject-facing "design" model. The first concern you highlight was covered by the solution-facing "design" model, where technical issues like the chosen programming language were addressed.

    Back in structured programming days, these models were very different (remember data flow diagrams and structure charts?). But with the advent of object orientation, these models are essentially the same, with increasing levels of elaboration to make them address the implementation technology. So the purpose of the solution-facing design model is to satisfy the stakeholders' requirements AND meet the technical constraints of the implementation technology. Now where have I heard this before:
    For in model-driven design, the design is not only based on the model, a model is also chosen to suit design and implementation considerations. These two constraints are solved simultaneously through iteration. This reflects, in part, practical limits of our particular hardware or our particular programming languages or infrastructure.

    So we are back to DDD. One model; no PIM and PSM, just one model. And before Brian pipes up and suggests that MDA will do the translation for you, even Agile MDA insists on just one model. Depending on your purpose you may be interested in "viewing" various levels of detail in your model. If your purpose is to test whether the model will execute correctly then you will want to see all the details. The most detailed representation of your model is the CODE.

    People can code in UML if they want to, but we both agree that there are better programming alternatives :^).

    Paul.
  194. In-language DSL[ Go to top ]

    I hope to dispel two myths you've been perpetuating:
    1. DSLs require special compilers or translators
    2. C++ is not an abstract language

    Check out GiNaC, a symbolic math library for C++ (with available Python bindings). It uses OOP w/operator overloading to allow algebraic expressions to be written and manipulated naturally in C++.

    http://www.ginac.de

    A page with quick examples:
    http://www.ginac.de/tutorial/How-to-use-it-from-within-C_002b_002b.html#How-to-use-it-from-within-C_002b_002b

    In other words, it provides a symbolic math DSL right there inside C++, w/o any precompilers or other magic. It's just a library, yet it uses language features that enable the programmer to effectively build increasing levels of abstraction and then interact with them as naturally as with native language features.
  195. In-language DSL[ Go to top ]


        symbol x("x"), y("y");   // GiNaC symbolic variables
        ex poly;                 // symbolic expression
        for (int i = 0; i < 3; ++i)
            poly += factorial(i+16) * pow(x,i) * pow(y,2-i);

    Looks pretty domain specific to me! Let's see if I can work out the language:

    polynomial
    factorial and
    power of

    Could it be Maths-speak?

    Brian, try expressing that in UML :^)

    Paul.
  196. In-language DSL favorites[ Go to top ]

    Hi All,

    Not wanting to flaunt my "hand-coding" credentials, but I thought it would be fun to post a couple of internal DSL favorites of mine.

    First Java. This is a TDD test expectation using the JMock framework. Incidentally, JMock also uses cglib and dynamic proxies, as I described earlier:
    aServiceMock.expects(once()).method("getSuspectItems").with(eq(reportId)).will(returnValue(items));

    And of course it wouldn't be complete without a Smalltalk example. This example uses the GLORP ORM framework to read back an object from a relational database:
    foundPerson := session readOneOf: Person where: [:each | each firstName = 'Jose'].

    I'm sure neither of these examples requires further explanation.

    BTW, for those of you out there who are familiar with Hibernate, check out GLORP:

    http://www.eli.sdsu.edu/SmalltalkDocs/GlorpTutorial.pdf

    ORM without XML, HQL or bytecode manipulation. Mappings, queries etc. all done using an in-language DSL.

    Paul.
  197. In-language DSL[ Go to top ]

    It uses OOP w/operator overloading ...
    In other words, it provides a symbolic math DSL ...
    That ain't a DSL. A DSL has a novel grammar. Your math example doesn't even impose new semantics upon C++. Your understanding of DSL theory is obviously half-baked.
  198. In-language DSL[ Go to top ]

    It uses OOP w/operator overloading ...In other words, it provides a symbolic math DSL ...
    That ain't a DSL. A DSL has a novel grammar. Your math example doesn't even impose new semantics upon C++. Your understanding of DSL theory is obviously half-baked.
    Hi Brian,

    I must say that you are beginning to get on my nerves. So please tell us all, what is an "internal DSL"? And of course I'm sure you will supply links to support your definition.

    Paul.
  199. In-language DSL[ Go to top ]

    So please tell us all, what is an "internal DSL"?
    I tried. I honestly tried to make sense of Fowler's explanation of internal DSLs. But his explanation didn't jibe with his example. Here Fowler explains it:
    Internal DSLs are limited by the syntax and structure of your base language.
    His example violates the syntax constraint I cited above. The example internal DSL has colon characters without corresponding question mark characters. So his DSL ain't limited by Java syntax. Maybe Fowler botched his explanation when he said "limited by the syntax of your base language".
  201. In-language DSL[ Go to top ]

    Internal DSLs are limited by the syntax and structure of your base language.
    Makes perfect sense to me. "Jibed" with me straight away, and with Erik too. What Fowler is saying is that the syntax of your internal DSL is limited to the syntax of your base language. So for base languages without operator overloading you can't use operators in your DSL; it's that simple.
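
    For example, with a hypothetical Money class in Java, the DSL author must spell the operators out as named methods, whereas a host language with operator overloading could keep the domain's own notation (a small illustrative sketch, not from any real library):

        // Java has no operator overloading, so a money DSL falls back on methods.
        class Money {
            private final long cents;
            Money(long cents) { this.cents = cents; }
            Money plus(Money m)  { return new Money(cents + m.cents); }
            Money minus(Money m) { return new Money(cents - m.cents); }
        }

        public class MoneyDemo {
            public static void main(String[] args) {
                Money price = new Money(1000), tax = new Money(175), discount = new Money(50);
                // In C++ or Smalltalk this could read: total = price + tax - discount
                Money total = price.plus(tax).minus(discount);
            }
        }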

    It seems to me that you owe both Erik and me an apology. To be honest, I do not feel you are actually adding to this discussion. Most people have probably left because they are sick and tired of hearing about MDA.

    There are lots of ideas that we could discuss; I've mentioned several. In fact I think I've had enough of MDA too.

    Haven't you got a view on anything else?

    Paul.
  202. In-language DSL[ Go to top ]

    So for base languages without operator overloading you can't use operators in your DSL; it's that simple.
    Apparently it ain't that simple. Look at Fowler's example internal DSL that I cited. It redefines the meaning of the colon operator, as I already noted. And yet in Java, the host language Fowler explicitly gave for his example internal DSL, operator overloading is lacking. So you're wrong on this point.
    Haven't you got a view on anything else? Paul.
    I chatted here about MDA, DSL, dynamic typing, and industry structural trends. What haven't I covered?
  203. In-language DSL[ Go to top ]

    I chatted here about MDA, DSL, dynamic typing, and industry structural trends. What haven't I covered?
    Yes, all in relation to MDA - and exhibiting your ignorance of everything else in the process.

    Go on, be nice, and go away quietly...

    Paul.
  204. Fowler's Example[ Go to top ]

    From: http://www.martinfowler.com/articles/languageWorkbench.html#InternalDsl
    (As a comparison, it would be interesting to see my more complex example developed in one of these dynamic languages. I probably won't get around to it, but I suspect someone else might, in which case I'll update the further reading.)

    The quote above has the following link embedded in it on Fowler's website:
    http://www.martinfowler.com/articles/mpsAgree.html

    He is linking to an example of an External DSL and saying it would be interesting if someone implemented it as an Internal DSL using a dynamic language.

    His example is an External DSL.

    Brian - you need to read more carefully.
  205. Fowler's Example[ Go to top ]

    His example is an External DSL. Brian - you need to read more carefully.
    Oh. I see you're correct. Till now Fowler's literature wasn't clear to me, especially since his example external DSL was hyperlinked from his explanation of internal DSLs. I regret any flailing on my part about this.

    Paul noted:
    In his paper Fowler only provides examples of external DSLs from what I remember.
    I see now you're correct about this. And this raises a serious question. Since Fowler avoids providing an example internal DSL, do you think that what is to come in the future beyond Java is internal DSLs, or merely more external DSLs? Ie, do you think Fowler's internal DSL idea will get traction?
  206. External DSLs[ Go to top ]

    I see now you're correct about this. And this raises a serious question. Since Fowler avoids providing an example internal DSL, do you think that what is to come in the future beyond Java is internal DSLs, or merely more external DSLs? Ie, do you think Fowler's internal DSL idea will get traction?

    I think it will, but I don't think most people will call them internal DSLs. They'll call them libraries and/or frameworks. Really, I'd call the "Internal DSL" concept more a guideline for library/framework design than a standalone concept. It says: libraries/frameworks should make the programming language and the domain language blend together.

    For example, the math library I referenced before uses C++'s operator overloading capabilities to allow the using programmer to build expression trees using a natural syntax. It extends the semantics of C++. Admittedly, given that math is generally a top-tier concern in programming languages, citing a math library as an Internal DSL, while correct, is a bit of a cop-out.

    From a language design standpoint, I think this means that future languages need to treat the ability to extend language semantics as a requirement. Java does the exact opposite of this, largely because many people create a mess when they try to extend language semantics, but I think that will have to change - probably by putting a new language on top of the JVM.
  207. Internal DSLs[ Go to top ]

    Really, I'd call the "Internal DSL" concept more a guideline for library/framework design than a standalone concept.
    Everyone here talks about internal DSLs as an elusive abstraction. This is Fowler's fault. His presentation (possibly even his understanding) of internal DSLs is underwhelming. He declines to give an example internal DSL or cite an existing one. The impression we get from this is that internal DSLs aren't a first-class concept, as you point out. At first I accepted this. But JavaDoc's an internal DSL.

    The best insight on internal DSLs is Klang's "XML and the art of code maintenance". He presents a metaprogramming framework with superb support for arbitrary internal DSLs. Klang's example internal DSLs (remember, Fowler gave none) include ones for design by contract, unit testing, and documentation.
    From a language design standpoint, I think this means that future languages need to consider the ability to extend language semantics a requirements.
    Klang's metaprogramming platform satisfies this entirely, or in his words:
    The programming language no longer defines the limits of development work in the types and variety of constructs it can incorporate, instead it provides an open structure to build upon.
  208. Compound documents, Internal DSLs[ Go to top ]

    Klang's example internal DSLs (remember Fowler gave none) include ones for design by contract, unit testing, and documentation.
    Oops, I missed another. In his paper he also shows an internal DSL for aspects. Klang has uncorked a jini.

    JavaScript is an internal DSL nested within HTML. Ie, internal DSLs already comprise the bulk of behaviors transmitted on the Web.

    A document has an internal DSL if the document contains multiple coding languages. This is what the W3C dubbed a compound document, ie, a document with multiple XML namespaces. XML has had namespaces since 1999, so internal DSLs have been scientific for at least that long. Fowler seems a latecomer.

    IBM has an Eclipse plugin for model driven compound documents.
  209. Compound documents, Internal DSLs[ Go to top ]

    Hi Brian,

    Fowler's use of the term internal DSL has more to do with building reusable abstractions using your general-purpose programming language. Generally, internal DSLs are more commonly known as libraries, APIs and frameworks. With clever coding and the appropriate language support you can make your API look like a domain-specific language itself. The two examples I gave previously are good examples of this:
        aServiceMock.expects(once()).method("getSuspectItems").with(eq(reportId)).will(returnValue(items));
    ...
    foundPerson := session readOneOf: Person where: [:each | each firstName = 'Jose'].

    Now the extent to which you can do this stuff depends on your base language and the degrees of freedom it allows. Languages that support deep reflection at run-time allow you to generate code "on the fly", adding increased expressiveness to your internal DSL. For example, Rails allows this with its ActiveRecord. Rails uses metadata from the database to create domain objects on the fly from database tables. No code generation in the normal sense, but when you send a message (e.g. name) to an ActiveRecord it looks up the appropriate field from the database record (e.g. NAME). The name of the ActiveRecord class is used to infer the database table, so class "Person" links to database table "PEOPLE".

    The feature of dynamic languages that facilitates this type of stuff is called a metaclass (the class of a class). Static languages like Java do not support metaclasses and only provide limited metaclass-like support through the use of the factory pattern.
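
    The nearest Java approximation is a dynamic proxy - the machinery that JMock and cglib-based frameworks build on. Here is a rough sketch of ActiveRecord-style dispatch; the Person interface and the row map are invented for illustration:

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;
        import java.util.Collections;
        import java.util.Map;

        public class RecordProxy {
            interface Person { String name(); }   // hypothetical domain interface

            public static void main(String[] args) {
                final Map row = Collections.singletonMap("NAME", "Jose");   // stands in for a DB row
                Person p = (Person) Proxy.newProxyInstance(
                    Person.class.getClassLoader(),
                    new Class[] { Person.class },
                    new InvocationHandler() {
                        public Object invoke(Object proxy, Method m, Object[] a) {
                            // resolve method name() against column NAME, as Rails infers fields
                            return row.get(m.getName().toUpperCase());
                        }
                    });
                System.out.println(p.name());   // prints "Jose"
            }
        }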

    Now XML is static, so you can specify metadata, but there is no equivalent to a metaclass, so no dynamic "metaclass behaviour". So you cannot determine how an instance of a class (an object) behaves at run-time using XML, unless you process your XML at run-time ("Dynamic XML") or use your XML to generate a dynamic language like Ruby. JavaScript is dynamic, but I have yet to see it used for metaprogramming.

    In my experience XML has typically been used as an external DSL. A good example of this is the ORM mappings in Hibernate. Because Java does not have metaclasses, Hibernate uses bytecode generation to change the behaviour of domain classes at run-time. To do this it needs to know what behaviour is required. Instead of inferring relationships by the use of a naming convention and reading metadata from the database schema (like Rails and ActiveRecord), Hibernate uses an external DSL in the form of an XML mapping file.

    Like I said before, Fowler made these terms up (internal/external DSL). I do find them useful in discussing concepts that have been around for a long time, but for which there wasn't a consistent vocabulary previously. If you have not used dynamic languages much, I can see how these terms could be confusing.

    Paul.
  210. Compound documents, Internal DSLs[ Go to top ]

    Generally, internal DSLs are more commonly known as libraries, APIs and frameworks.
    I don't recall Fowler ever claiming this. Can you give a cite?
    In my experience XML has typically been used as an external DSL.
    *yawn*
    External DSLs are a tired, ancient concept and undeserving of further theoretical research. I'm gaining an interest in internal DSLs. Internal DSLs have been most successful in markup documents, such as HTML and XML. No one has advanced the theory of internal DSLs as much as Klang. His paper hailed the supremacy of XML for internal DSL metaprogramming. W3C's CDF working group and specification are standardizing a contract to integrate compound documents and internal DSLs. I assume this is humanity's only standards effort regarding internal DSLs as a generalized abstraction. The writing's on the wall: XML will drive the evolution of internal DSLs.
    If you have not used dynamic languages much I can see how these terms could be confusing.
    Um, I professionally coded Perl-5 and JavaScript, and I'm currently the core developer of a commercial Jython IDE. I've also briefly investigated Ruby's amazing DSL capabilities. Invalidating my perspective on internal DSLs takes more effort than you've given.
  211. Compound documents, Internal DSLs[ Go to top ]

    His paper hailed the supremacy of XML for internal DSL metaprogramming.

    I'd rather work in a graphical tool than type all those darn angle brackets, and I'd rather type in a clean grammar than work in a graphical tool.

    XML was born because most people are either too lazy or too stupid to write parsers. The lazy part is probably a good thing, and the stupidity part explains all the ridiculous usages of XML.

    (I'm not bashing XML, it just lets children run with scissors, which it's XML's fault)
  212. oops[ Go to top ]

    (I'm not bashing XML, it just lets children run with scissors, which it's XML's fault)

    (I'm not bashing XML, it just lets children run with scissors, which isn't XML's fault)
  213. Compound documents, Internal DSLs[ Go to top ]

    Invalidating my perspective on internal DSLs takes more effort than you've given.
    Hi Brian,

    No one is trying to invalidate you or your perspective at all. The ideas I've expressed are impersonal; they do not belong to me, and they are not intended to knock you.

    Perhaps your comment above explains why we've spent so much time going in circles :^)

    I'm sorry you took what I felt to be an informative post the way you did.

    Regards,

    Paul.
  214. Compound documents, Internal DSLs[ Go to top ]

    JavaScript is an internal DSL nested within HTML. Ie, internal DSL already comprises the bulk of behaviors transmitted on The Web.

    I think JavaScript is an embedded DSL (I just made that up)... and I'm not really sure it qualifies as a DSL. It's more like a general-purpose scripting language that is, for the most part, only used for one purpose.
    Ie, a document with multiple XML namespaces. XML has had namespaces since 1999, so internal DSLs have been scientific at least that long ago.

    I don't think XML counts, because IMHO XML is not a language. It's a metagrammar (I made that up, too) without semantics that can be used to create languages (using schemas to specify the grammar and interpreters to act on them) that can be effectively intermingled.

    So, in my opinion, Internal DSLs follow the grammar of the host language and extend its semantics. XML has no semantics, so there are none to extend.

    But I agree that the idea of blending multiple languages into a single XML document is a powerful concept. I just hate typing out XML so I wouldn't want to program in it.
    Fowler seems a latecomer.

    I think Fowler isn't trying to define a new concept, but rather to put a new frame around existing concepts so that they can be better understood and more effectively applied.

    As Paul has repeatedly pointed out, there isn't that much new out there. "Modern" languages are still catching up to the likes of Smalltalk and LISP, which have been around for ages. The problem is, for one reason or another, the software development community has failed to grok some of the core concepts of these languages. Consequently, people like Fowler repackage them in attempts to bring the concepts to a wider audience.
  215. I don't think XML counts, because IMHO XML is not a language. It's a metagrammar (I made that up, too) without semantantics that can be used to create languages ... that can be effectively intermingled.
    Dude, what you say about XML immensely implies internal DSLs. Ant, XSL, and o:XML are processing languages with rich semantics. Hence XML is a first-class programming grammar.

    You say you don't like the angle brackets. That's semantically irrelevant, especially since XML can have alternate presentations, such as JSON and GroovyML.

    With InfoSet, DOM, XPath, XSL, and other essential machinery already standardized, this makes XML the absolutely easiest way to implement a new internal DSL or mix in existing internal DSLs. XML's momentum at internal DSLs is unbeatable. XML will surely drive the evolution of multi-language documents. XML already dominates this. Eg, aspect weaving and other source code transformations are unbelievably easy when the subject matter is expressed as XML. It only took Klang a flimsy little paper to demonstrate this.
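
    For instance, with nothing beyond standard JAXP a whole-document transformation is a few lines (a minimal sketch; the stylesheet and input file names are placeholders):

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        // Apply an XSL transformation to an XML document using the standard JAXP API.
        public class TransformDemo {
            public static void main(String[] args) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource("weave-aspects.xsl"));   // placeholder stylesheet
                t.transform(new StreamSource("program.xml"),                  // placeholder input
                            new StreamResult(System.out));
            }
        }
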
    The problem is, for one reason or another, the software development community has failed to grok some of the core concepts of these languages.
    That's because custom parsing, graph navigation, and transformation are a nightmare with traditional languages. XML remedies this entirely.
  216. Dude, what you say about XML immensely implies internal DSLs. Ant, XSL, and o:XML are processing languages with rich semantics. Hence XML is a first-class programming grammar.

    Ant, XSL, and o:XML are all languages with rich semantics that conform to XML's (meta)grammar.
    With InfoSet, DOM, XPath, XSL, and other essential machinery already standardized, this makes XML the absolutely easiest way to implement a new internal DSL or mix in existing internal DSLs.

    I agree that it is the easiest. I just don't think it's the best, because a programming language should be a pleasure to use for programming. Typing XML is not a pleasure.

    All that being said, I think XML is a great format to be used for storage and intermediate processing. There's no reason why what the programmer sees on the screen has to be identical to what the computer stores on disk and what automated tools process - there just needs to be a 1:1 mapping between them.

    Again, I don't think XML is a language. Never has been, never will be. But it can be very effectively used for expressing languages.
  217. +1
    Agreed. But isn't there something more interesting we can discuss? What about my point on late-binding and performing AOP "on-the-fly" (objects wired together at run-time)?

    I really feel that this approach is much more powerful than compile-time annotations in Java, and will make Operating Systems like Croquet vastly superior to anything we have today.

    Paul.
  218. Agreed. But isn't there something more interesting we can discuss? What about my point on late-binding and performing AOP "on-the-fly" (objects wired together at run-time)?

    Yes, that is more interesting. I think I've figured out why thought-leaders are thought-leaders. It's not necessarily because they think more insightful thoughts, it's because they take the time to express them and come up with concrete examples.

    Me... I sit down and start implementing Fowler's example External DSL as an Internal DSL in Python, and an hour later my wife makes a remark about something in the house and I'm off to Lowe's (big home improvement chain in the US).
  219. Agreed. But isn't there something more interesting we can discuss? ...
    Me... I sit down and start implementing Fowler's example External DSL as an Internal DSL in Python, and an hour later my wife makes a remark about something in the house and I'm off to Lowe's (big home improvement chain in the US).
    I guess there's more to life...:^)

    Looking forward to seeing your example in Python though. Here is an article I came across regarding metaprogramming in Smalltalk (actually it is about adding Mixins into standard Smalltalk). It shows just how easy it is to extend the language, even changing its meta-architecture:
    http://www.ubilab.org/publications/print_versions/pdf/sch98-1.pdf


    I'm really sold on the idea of an Image and late binding. Think of an operating system that contained just objects (no "applications"). Each object can be explored and inspected. Also, you can "script" objects together any way you like, coming up with new "objects" that the original object authors never even dreamt of.

    Provide end users with the ability to wire objects together and you've got an "end-user" extensible environment. They are experimenting with this in Croquet at the minute using a scripting language called TVML:

    http://jlombardi.blogspot.com/2004/12/talking-to-your-tv.html


    As if you didn't have enough to do :^) Enjoy.

    Paul.
  220. XML, Compound documents, Internal DSLs[ Go to top ]

    I'm really sold on the idea of an Image and late binding. Think of an operating system that contained just objects (no "applications"). Each object can be explored and inspected. Also, you can "script" objects together any way you like, coming up with new "objects" that the original object authors never even dreamt of. Provide end users with the ability to wire objects together and you've got an "end-user" extensible environment.

    I used to be sold on the idea when I used Smalltalk in the late 80s and early 90s. The problem is that when it goes wrong it goes very, very wrong. In order to ensure that you can recover from this situation you have to put in a lot of work to allow the rebuilding of the images. Maybe I was just clumsy, but too often I found myself carefully filing in change sets to try and get back my work.

    Java's class-files-in-directories approach with dynamic loading was a reaction to the issues that arose from the image approach.
  221. XML, Compound documents, Internal DSLs[ Go to top ]

    I'm really sold on the idea of an Image and late-binding
    I used to be sold on the idea when I used Smalltalk in the late 80s and early 90s. The problem is that when it goes wrong it goes very, very wrong.
    This has concerned me too. Having said that, operating systems today are built on a binary in-memory "image" and they are pretty stable.

    One of the problems I think Smalltalk has suffered from is that it never really shook off its research roots. I'm sure that if a fraction of the energy (never mind expenditure) that has been spent on Java/J2EE over the years had been spent "productising" Smalltalk, such concerns would evaporate.

    As for the Croquet/Squeak implementation, the Image seems pretty stable so far given that Croquet is still pre-alpha. I guess that we will just have to wait and see.

    BTW. Have you had a chance to take a look at Croquet?

    Paul.
  222. XML, Compound documents, Internal DSLs[ Go to top ]

    I'm really sold on the idea of an Image and late-binding
    I used to be sold on the idea when I used Smalltalk in the late 80s and early 90s. The problem is that when it goes wrong it goes very, very wrong.
    This has concerned me too. Having said that, operating systems today are built on a binary in-memory "image" and they are pretty stable. One of the problems I think Smalltalk has suffered from is that it never really shook off its research roots. I'm sure that if a fraction of the energy (never mind expenditure) that has been spent on Java/J2EE over the years had been spent "productising" Smalltalk, such concerns would evaporate.

    Sorry to keep disagreeing :)

    My memory of things is that Smalltalk did rather well in the early 90s, and there was a reasonable take-up of the language, especially for financial applications. I think it did take off, and was far more than just a research language - products like VisualWorks and VisualAge Smalltalk were reasonably successful, and there are still companies making money from them. Also, Digitalk Smalltalk was pretty influential on personal computers for a few years.
    As for the Croquet/Squeak implementation, the Image seems pretty stable so far given that Croquet is still pre-alpha. I guess that we will just have to wait and see. BTW, have you had a chance to take a look at Croquet? Paul.

    Ah! As you mentioned Alan Kay, I suspected that Squeak would be involved somehow! I haven't had a look at Croquet yet - too busy, but the website looks interesting. I'll see if I can set aside some time.
  223. XML, Compound documents, Internal DSLs[ Go to top ]

    I'm sure that if a fraction of the energy (never mind expenditure) that has been spent on Java/J2EE over the years had been spent "productising" Smalltalk, such concerns would evaporate.
    Sorry to keep disagreeing :) My memory of things is that Smalltalk did rather well in the early 90s
    A fair point. I guess every architecture has its strengths and weaknesses. The issue then is whether the benefits outweigh the disadvantages. Python and Ruby use the file-based approach; I just can't see how you would turn them into an OS though.
    As for the Croquet/Squeak implementation, the Image seems pretty stable so far given that Croquet is still pre-alpha. I guess that we will just have to wait and see. BTW, have you had a chance to take a look at Croquet? Paul.
    Ah! As you mentioned Alan Kay, I suspected that Squeak would be involved somehow! I haven't had a look at Croquet yet - too busy, but the website looks interesting. I'll see if I can set aside some time.
    Yeah, Alan Kay hasn't given up on the "Dynabook" idea. The whole Squeak project has been leading up to this point. I'm pretty confident that Alan Kay and his team have thought through most eventualities and are ironing out all the wrinkles, but as you point out, the devil will be in the detail.

    Paul.
  224. Croquet - The next big thing...[ Go to top ]

    Hi All,

    Steve's comments about the stability of the Smalltalk Image have got me thinking. Is the idea of an image fundamentally flawed? Well, I can think of a very successful Java image-based environment: Eclipse. Eclipse was inspired by the Smalltalk development environment. Eclipse does all its fancy refactoring and reflective searches based on a project "image" it creates in memory from Java/class files imported from the file system. So the question is: why is Eclipse so stable when, as Steve has pointed out, past Smalltalk environments have been somewhat brittle?

    Well the Eclipse environment is not part of the Eclipse image, so the environment itself cannot get corrupted. This is akin to system/user space in an OS like Unix. The core of the system is protected from errant user actions and programs.

    Smalltalk people have always seen the fact that the development environment and the user programming space were one and the same thing as an advantage. I think it is the "language workbench" and "external DSL" argument all over again. Having the power to change everything does give you a lot of flexibility, but with power comes great responsibility, and sometimes programmers do need to be protected from themselves. Not every programmer should have, or would want, the power to modify all the internals of the system.

    A type of firewall between user and system space does seem like a prerequisite for Croquet if it is going to become a stable 3D NOS as reliable as, say, Unix.

    I wouldn't dream of lecturing Alan Kay and his team, and I'm sure they have got this issue on their radar. It is an interesting point though; I just thought I'd mention it.

    Paul.
  225. My memory of things is that Smalltalk did rather well in the early 90s, and there was a reasonable take-up of the language...
    In the early '90s my employer was using Smalltalk for its frontend. But then a meteoric rival struck from out of the blue, and the splash made by a new arrival was so huge that Smalltalk was doomed to commercial extinction by its own technological weakness. The newcomer was Java, and it elevated developer productivity and deliverable quality.

    The market's invisible hand was irresistible. Almost everyone who coded Smalltalk in the dark ages is today coding Java or has left the craft.
    The newcomer was Java, and it elevated developer productivity and deliverable quality. The market's invisible hand was irresistible. Almost everyone who coded Smalltalk in the dark ages is today coding Java or has left the craft.

    Hi Brian,

    So you'd rather leave your technical decisions to the marketing departments of software vendors... No wonder you're sold on MDA :^)

    As for me, I'd rather make choices based on technical merits. Fortunately there are still many others (not in the business software community, admittedly) who feel the same way.

    Croquet is gaining momentum, and they have a new demo video out. The video seems to target more of a "business software" audience. Personally, I preferred the laid-back, matter-of-fact academic presentation by Alan Kay. But as Brian rightly points out, for business people marketing counts - so this presentation has more of a "marketing bent".

    BTW. The questions from the audience aren't as intelligent either, but then again what do you expect from "business people" :^).

    http://am.mediasite.com/am/viewer/Viewer.aspx?layoutPrefix=LayoutLargeVideo&layoutOffset=Skins/Clean&width=881&height=648&peid=172f6de5-135b-4ba0-9207-ac6d383812c9&pid=fc503ef3-4a4e-44a6-b289-c915d8bf7bd3&pvid=502&mode=Default&shouldResize=false&playerType=WM64Lite

    Enjoy,

    Paul.
  227. My memory of things is that Smalltalk did rather well in the early 90s, and there was a reasonable take-up of the language...
    In the early '90s my employer was using Smalltalk for its frontend. But then a meteoric rival struck from out of the blue, and the splash made by a new arrival was so huge that Smalltalk was doomed to commercial extinction by its own technological weakness. The newcomer was Java, and it elevated developer productivity and deliverable quality. The market's invisible hand was irresistible. Almost everyone who coded Smalltalk in the dark ages is today coding Java or has left the craft.

    That isn't my memory of what happened. Firstly, Smalltalk simply hasn't suffered commercial extinction - it is not a dominant language, but is still used (there are far more Smalltalk jobs advertised out there than for Ruby on Rails!), and there are commercial vendors making money from it.

    Smalltalk has always had first-rate developer productivity and quality - that was never in doubt. Where Java succeeded was in providing a combination of cross-platform compatibility, vendor independence and performance... and it looked like C. The latter point is why it really took off, and outweighed all other advantages. It was a familiar language to millions of developers.

    However, I am sure that if the Smalltalk vendors had had some sense, and got together to provide an effective common and high-performance standard for the language, then Java would not have had such an easy ride.

    There are good reasons for Java's eventual success, and I now personally find it a highly productive language, but to claim that Java was 'meteoric' seems mistaken to me, and I simply can't understand how anyone can claim that early versions of Java 'elevated developer productivity and deliverable quality'. It took years for Java to turn into a widely accepted general purpose language - I can remember waiting for Java Workshop (an early Sun IDE) to ever-so-slowly open my project, and I can tell you that my thoughts were not about 'productivity' or 'quality'. They were somewhat less polite.
  228. Smalltalk vs Java[ Go to top ]

    I've been using Java from the beginning. I just knew that it would be a hit, simply because it looked like C, was much better than C++, and Sun was marketing the hell out of it. Besides, it held the promise of becoming a new internet-based distributed application platform.

    From the beginning I knew that Smalltalk was far superior to Java conceptually, having learnt OO development using Smalltalk after giving up on C++, but for the practical considerations Steve has pointed out I felt that Java would be a winner. Cross-platform, standard APIs, free to download, etc.

    I've learnt a lot since 96-97 and looking back, my feelings are very different. Prior to IntelliJ and Eclipse, Java development was painful - has everyone forgotten classpath hell? And as for Java as a platform, the Java/web browser combination faltered early on, and Application Servers have proven to be an expensive mistake. Java as a platform has failed simply because Java lacks late binding; what remains are some pretty good standard services and APIs such as JDBC and JTA, upon which people have built some great frameworks like Hibernate and Spring. But the jewels of Java/J2EE - the Java-enabled browser and the Application Server - are more or less dead.

    The main strength of Java - standards - has also been the root cause of its stagnation. Who would even consider using "standard" EJB2.1 persistence today?

    Premature standardisation for commercial reasons kills innovation. Hence we are still building the types of apps in Java today that people built in the 90's with CGI and Perl. What is needed is a "freedom of movement" that allows people to experiment with new ideas and lets emergent standards appear over time - rather than being dictated up front. Java has always lacked this "freedom of movement".

    I think Java is a product of the community that produced it, namely commercial business software vendors. Such people lack vision and tend to be very conservative and short-term in their outlook. Look at the fuss over Y2K and the hype around 64-bit CPUs and Web 2.0.

    If I had spent the last 10 years investing in Application Servers and Java Web Applications, I would be very disappointed, and I wouldn't consider it value for money. Especially considering that I can put a 1970's supercomputer on each employee's desk for a few hundred dollars - yet my programmers can't build the software to really exploit it.

    Other sectors have been much better at exploiting technology than the business sector. Examples that come to mind are Telecommunications and Production Engineering. Yet today the business sector tends to determine the nature of general-purpose software technology; I guess this is a throwback to the dominant days of IBM.

    I could say a lot more - but I think Alan Kay puts it very eloquently:

    http://acmqueue.com/modules.php?name=Content&pa=showpage&pid=273&page=1

    So who do we have to blame? IMO ourselves. With any luck next time round we will learn the lessons of history.

    Paul.
  229. SunLabs, universities, terse grammar[ Go to top ]

    Where Java succeeded was in providing a combination of cross-platform compatibility, vendor independence and performance, and it looked like C.
    Like I said, it's a meteoric splash of technological supremacy, for exactly the reasons you give. Thanks for making my point.
    The latter point [about grammar] is why it [Java] really took off...
    Thanks also for implying that SmallTalk's unwieldy grammar is productively inferior to C/Java's ergonomically terse grammar. Grammar matters so much for productivity -- not for less keystroking (no one does that anymore), but rather for visual logic density (ie, full utilization of screen real estate). Grammar is a visual aesthetic with profound productivity implications. Readability and density are paramount. SmallTalk blew it and the invisible hand swept SmallTalk away as trash. Indeed, Steve himself asserts that SmallTalk lost a grammar war.
    There are good reasons for Java's eventual success, and I now personally find it a highly productive language, but to claim that Java was 'meteoric' seems mistaken to me...
    Java went from debut to dominant university language in less than a decade. In the same time it also became the dominant commercial language.

    In your professional lifetime have you ever seen another language that could dominate both university and industry in less than a decade? The only other such language I know is possibly C, which I also consider a meteoric champion.

    SmallTalk was invented as a teaching language, yet Java dominates today's universities. Since SmallTalk couldn't retain its homeland, what use is it? Industry doesn't want it.

    Sun was well aware of SmallTalk in the '80s, yet instead invested heavily in Self and Oak. The greatest JIT and GC geniuses were stolen by SunLabs and encouraged to kill SmallTalk, which they easily did. If you were once a SmallTalker and now a Java dork, then you can consider yourself a victim of this technologically justified coup.
  230. SunLabs, universities, terse grammar[ Go to top ]

    Where Java succeeded was in providing a combination of cross-platform compatibility, vendor independence and performance, and it looked like C.
    Like I said, it's a meteoric splash of technological supremacy, for exactly the reasons you give. Thanks for making my point.

    I was not making your point. Vendor independence and looking like C is nothing to do with technological supremacy.
    The latter point [about grammar] is why it [Java] really took off...
    Thanks also for implying that SmallTalk's unwieldy grammar is productively inferior to C/Java's ergonomically terse grammar.

    I wasn't. I think Smalltalk's (oh, and by the way it is 'Smalltalk', not 'SmallTalk') grammar is far superior to the C-like grammar of Java.
    Grammar matters so much for productivity -- not for less keystroking (no one does that anymore), but rather for visual logic density (ie, full utilization of screen real estate). Grammar is a visual aesthetic with profound productivity implications. Readability and density are paramount. SmallTalk blew it and the invisible hand swept SmallTalk away as trash. Indeed, Steve himself asserts that SmallTalk lost a grammar war.

    I really don't understand what you mean here. Apart from the plain fact that Smalltalk simply hasn't been 'swept away as trash', and is still in use today and continues to be influential.

    I find Java productive, but not as productive as Smalltalk.
    There are good reasons for Java's eventual success, and I now personally find it a highly productive language, but to claim that Java was 'meteoric' seems mistaken to me...
    Java went from debut to dominant university language in less than a decade. In the same time it also became the dominant commercial language.

    I don't think anyone would say that Smalltalk was ever the dominant commercial language.
    In your professional lifetime have you ever seen another language that could dominate both university and industry in less than a decade?

    Actually, Pascal did pretty well - Turbo Pascal was pretty much everywhere...
    The only other such language I know is possibly C, which I also consider a meteoric champion. SmallTalk was invented as a teaching language, yet Java dominates today's universities. Since SmallTalk couldn't retain its homeland, what use is it? Industry doesn't want it. Sun was well aware of SmallTalk in the '80s, yet instead invested heavily in Self and Oak. The greatest JIT and GC geniuses were stolen by SunLabs and encouraged to kill SmallTalk, which they easily did.

    Such a dramatic interpretation! I doubt that the intention was to kill anything, but simply to get performance into a VM-based language.
    If you were once a SmallTalker and now a Java dork, then you can consider yourself a victim of this technologically justified coup.

    You do seem to want to put things in overly dramatic ways :)

    I am not a Java dork, and I don't see a coup. If you want to understand what use Smalltalk is, then have a look at Squeak, or innovative Smalltalk applications like the Seaside web framework - it is still showing the way for future development, more than 30 years after it was first invented.

    Yet again, Java's rise was nothing like meteoric. It has only just supplanted C++ after all this time! Smalltalk is still in use, and still being used in pioneering ways.

    Things are far, far more complex than the situation you describe.
  231. SunLabs, universities, terse grammar[ Go to top ]

    Vendor independence and looking like C is nothing to do with technological supremacy.
    That's your opinion, with which I disagree.
    Indeed, Steve himself asserts that SmallTalk lost a grammar war.
    I really don't understand what you mean here.
    You said, "The latter point [about grammar] is why it [Java] really took off...". The free market and I agree with you that Java beat Smalltalk at grammar.
    I don't think anyone would say that Smalltalk was ever the dominant commercial language.
    Indeed. There are technological reasons that Smalltalk is commercially stillborn. The most disruptive causes of Smalltalk's arrested development are Java and C++.
  232. SunLabs, universities, terse grammar[ Go to top ]

    Vendor independence and looking like C is nothing to do with technological supremacy.
    That's your opinion, with which I disagree.

    Fair enough!
    Indeed, Steve himself asserts that SmallTalk lost a grammar war.
    I really don't understand what you mean here.
    You said, "The latter point [about grammar] is why it [Java] really took off...". The free market and I agree with you that Java beat Smalltalk at grammar.

    Depends what you mean by 'beat'. Popularity does not necessarily indicate quality. Visual Basic was highly popular as a development language, but I don't think that anyone would argue that it has a good grammar.
    I don't think anyone would say that Smalltalk was ever the dominant commercial language.
    Indeed. There are technological reasons that Smalltalk is commercially stillborn. The most disruptive causes of Smalltalk's arrested development are Java and C++.

    For the third time, Smalltalk is not commercially stillborn. It is still used, and commercial companies are still making money from it.
  233. Vision[ Go to top ]

    Hi Steve,
    So you've met Brian :^) In your response you mentioned innovation:
     If you want to understand what use Smalltalk is, then have a look at Squeak, or innovative Smalltalk applications like the Seaside web framework - it is still showing the way for future development, more than 30 years after it was first invented.

    I remember sitting in a Valtech training course on Java Enterprise Architectures. I think it was 97/98. Anyway, the presenter was mapping out Sun's vision for the future of Java. It was all based on Sun's slogan at the time, "The Network is the Computer". The vision relied on the ubiquitous presence of the Java VM in everything, and on mobile software agents. Imagine coming into a strange office and your laptop automatically discovering all the networked services available. After looking up and subscribing to services, mobile drivers would be downloaded to your machine, all self-configuring. So if you wanted to print, you would find a selection of printers available to you on the network.

    The vision was based on technologies like JINI and JavaSpaces - basically the idea was that mobile agents would wander around the network, exploring any device that had a JVM and federating together to provide services. I remember being really sold on the idea. That was ten years ago; what has happened since? What happened to JINI and JavaSpaces? Where are all those Java-enabled networked devices - watches, cars, televisions, laptops, internet terminals, phones, credit cards etc. - that we were promised?
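
    For anyone who never saw that API, the programming model was strikingly simple. Here is a rough sketch of the "walk into a strange office and find a printer" scenario (the PrinterAd entry and the findSpace() helper are invented for illustration; real code needs Jini discovery, which I've omitted):

        import net.jini.core.entry.Entry;
        import net.jini.core.lease.Lease;
        import net.jini.space.JavaSpace;

        // A JavaSpaces entry is a plain object with public fields and a
        // no-arg constructor; spaces match entries against templates whose
        // null fields act as wildcards.
        class PrinterAd implements Entry {
            public String name;
            public String location;
            public PrinterAd() {}
            public PrinterAd(String name, String location) {
                this.name = name;
                this.location = location;
            }
        }

        public class OfficeDiscovery {
            // Stand-in for Jini multicast discovery and lookup, omitted here.
            static JavaSpace findSpace() {
                throw new UnsupportedOperationException("wire up Jini discovery");
            }

            public static void main(String[] args) throws Exception {
                JavaSpace space = findSpace();

                // A printer advertises itself when it joins the network...
                space.write(new PrinterAd("laser-2", "3rd floor"), null, Lease.FOREVER);

                // ...and a visiting laptop finds any printer by template matching,
                // waiting up to ten seconds for one to appear.
                PrinterAd found = (PrinterAd) space.read(new PrinterAd(), null, 10000);
                System.out.println("Found " + found.name + " on the " + found.location);
            }
        }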

    Innovation in the Java space really did seem to die a death some time after the abandonment of applets, why?

    BTW: A similar vision is being realised in Croquet (mobile code). In a previous post I've provided a link to a demo video. It is well worth a look (about 1.5 hrs) if you can find the time.

    Paul.
  234. What happened to JINI and JavaSpaces? Where are all those Java-enabled networked devices - watches, cars, televisions, laptops, internet terminals, phones, credit cards etc. - that we were promised? Innovation in the Java space really did seem to die a death some time after the abandonment of applets, why?
    Many phones run Java. Nearly all Unix distros include Java. Kazaa, the leading file swapper, uses Java. Globus, the leading grid platform, is based on Java. Grid, utility, and autonomic computing mostly don't happen now without Java. The busiest web sites (eg, weather.com, google.com) and web applications rely on Java. So why are you bringing up stupid stuff like watches and credit cards?
    In a previous post I've provided a link to a demo video. It is well worth a look (about 1.5 hrs) if you can find the time.
    Maybe you could recap about whatever is the fundamental innovation. Please.
  235. Vision[ Go to top ]

    What happened to JINI and JavaSpaces? Where are all those Java-enabled networked devices - watches, cars, televisions, laptops, internet terminals, phones, credit cards etc. - that we were promised? Innovation in the Java space really did seem to die a death some time after the abandonment of applets, why?
    Many phones run Java. Nearly all Unix distros include Java. Kazaa, the leading file swapper, uses Java. Globus, the leading grid platform, is based on Java. Grid, utility, and autonomic computing mostly don't happen now without Java. The busiest web sites (eg, weather.com, google.com) and web applications rely on Java. So why are you bringing up stupid stuff like watches and credit cards?
    Hi Brian, this is really getting tiresome - Have you heard of mobile software agents, Jini, JavaSpaces? - I suggest you get googling...
    In a previous post I've provided a link to a demo video. It is well worth a look (about 1.5 hrs) if you can find the time.
    Maybe you could recap about whatever is the fundamental innovation. Please.
    This post wasn't intended for you. The innovations I'm referring to are in Smalltalk, so I guess you won't be interested. After all, that language is trash, right? :^)

    Paul.
  236. Vision and Innovation[ Go to top ]

    Hi Brian,

    That last response was a little "cold". But you have tried my patience. Besides there may be others interested to know this stuff also.

    Back in the day (97/98 I think) Sun had already put Java on a chip. I think they even had Java on a ring (finger ring), so a credit card (smart card) was totally feasible. The idea was that all these devices would gain intelligence through Java and become part of one massive super computer. All these processing devices would become one through the network. It is not a new idea, but it does require code mobility.

    I've just checked and Sun still "sells" JINI technology, but they definitely aren't pushing it. Back then JINI was set to take over the world within a year or two...

    I mention Vision, because it appears that the Sun engineers did have one. So the question is what happened to it? My guess is that they probably hit technical problems - since Java is a bit of a hybrid, half dynamic (interfaces) and half static (compile-time type system), and JINI is essentially a dynamic architecture. Also, as Alan Kay points out, for code mobility each and every VM has to be bit-identical (so that you can guarantee that code written and deployed on one VM can migrate and will run on another).

    The way Sun licensed the JVM through a Paper Spec, you cannot guarantee bit compatibility.

    I think probably the biggest reason is that the marketing guys probably weren't interested in the Vision. After all there was plenty of money to be made out of selling Application Server licenses, and the Java Cartel (Sun, IBM, Oracle, BEA etc.) probably didn't want to get too far ahead of their customers, who are a conservative bunch anyway. As I remember it, they were busy playing the "my app server is bigger than yours" game with each other and Microsoft (.NET).

    Who knows the real reason, but JINI and JavaSpaces didn't happen.

    Croquet holds out the vision of massively parallel processing using replicated objects across the internet, scaling well beyond what is possible with the current server-centric web model (Croquet relies on peer-to-peer communication).

    Croquet also has a network time (TeaTime) which replicated objects can use to synchronise their actions using a two-phase commit. So as you enter my virtual 3D space, all the objects in that space are immediately replicated to your machine. The objects will compute on both machines bit-identically. Any event that occurs on my machine is broadcast to all the replicated spaces and vice versa. So if I rotate an object in my space, that message will be sent to the replica object on your machine, which will calculate its new coordinates; when all objects in the transaction agree on their new position, the transaction is committed and each replica renders itself from the perspective of the local camera.

    Only events are sent over the network, so a 3D shared space is possible with as little as 20-30Kb of network bandwidth, and remember this is all fully scalable.
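
    To make the shape of the idea concrete, here is a toy sketch (my own invented names, not the real TeaTime API, and with in-process "replicas" standing in for networked machines): an event is broadcast to every replica, each computes deterministically, and the result is committed only when all replicas agree.

        import java.util.ArrayList;
        import java.util.List;

        // Each replica holds the same state and applies the same messages
        // deterministically, so identical inputs yield identical state everywhere.
        class ReplicaBox {
            double angle;
            void rotate(double degrees) {
                angle = (angle + degrees) % 360.0;
            }
        }

        class SharedSpace {
            final List<ReplicaBox> replicas = new ArrayList<ReplicaBox>();

            // Phase 1: broadcast the event; every replica computes its new state.
            // Phase 2: commit (i.e. render) only if all replicas agree on the result.
            boolean broadcastRotate(double degrees) {
                for (ReplicaBox r : replicas) {
                    r.rotate(degrees);
                }
                double expected = replicas.get(0).angle;
                for (ReplicaBox r : replicas) {
                    if (r.angle != expected) {
                        return false; // abort: a replica diverged
                    }
                }
                return true; // commit: each replica now renders locally
            }
        }

    The real system sends only the timestamped message over the wire; the state itself never travels, which is why the bandwidth stays so low.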

    If that isn't innovative I don't know what is.

    Paul.
  237. Grids, P2P, SOAP[ Go to top ]

    The idea was that all these devices would gain intelligence through Java and become part of one massive super computer.
    Do you have any evidence that suggests a rival is leapfrogging Java at this? Is there a rival that even shows promise at this? As far as I know, Java and JavaScript are still the kings at dynamically transferring logic on a heterogeneous network. I know most scripting languages can do this, but none have entrenched themselves for this purpose like Java and JavaScript.
    Back then JINI was set to take over the world within a year or two.
    In the short term, J2EE clustering and Beowulf farming arrested Jini's development. Eg, the Condor universe dynamically distributes native applications, and Jini is limited to pure Java. This matters since the e-science community has mostly wanted to avoid Java for number crunching. Hence the death of JavaGrande, JavaNumerics, etc.

    In the long term, what really hurt Jini most is SOAP, which became the foundation of computational grids and utility and autonomic computing. Even JXTA can't keep up with the proliferation of SOAP grids.

    P2P has historically had a diverse set of rival protocols, such as Jini and Jabber, but the destiny of P2P support is sure to be increasingly dominated by WS-* standards such as WS-Discovery, WS-Addressing, etc. For achieving an amorphous topology, Jini matters less and less.

    The WS-* landscape would still need mobile logic, and that's where Java, JavaScript, and other scripting languages come in. But the role of these processing languages is bound to be reduced mostly to business logic, as it should be.
  238. Croquet - Vision and Innovaton[ Go to top ]

    The idea was that all these devices would gain intelligence through Java and become part of one massive super computer.
    Do you have any evidence that suggests a rival is leapfrogging Java at this?
    Yes.

    http://www.opencroquet.org/

    The issues involved are a lot more complex than you make out; there are several white papers on the Croquet website that explore the technical issues involved. JavaScript is not innovative - it was a quick and dirty short-term fix that, like you said, became entrenched. You cannot implement Croquet with JavaScript.

    As for SOAP and Web Services - they are total non-starters for the types of things the Croquet team are doing. These technologies are based on leveraging existing incumbent technology. This makes them an easier sell, as you can claim that your customer can leverage his existing investment. SOAP and Web Services are not innovative - they are merely attempts at standardisation for commercial reasons.

    This is the point I've been making for a while: commercially led "technologies" tend to serve the purposes of their vendors (namely making money), not their customers.

    What I am talking about here is true vision. Alan Kay, the inventor of OO, GUI interfaces, the desktop metaphor, etc., has always had a vision. The latest incarnation of that vision is being realised in Croquet, but the vision can be traced back to the Dynabook idea in the 1960's:

    http://en.wikipedia.org/wiki/Dynabook

    Will the Croquet technology deliver? Who knows - I think so (I am using it now and it works very well). But at least there is a consistent vision in play; my question is what has happened to Sun's Vision for Java?

    Paul.
  239. You cannot implement Croquet with JavaScript.
    I assume Croquet could be implemented in either Java or JavaScript. I don't consider this a big deal. I've seen grid containers written in Python or Perl. As for collaboration, some phrases to google for include "sophisticated shared browser using the JavaScript" and "Collaborative DOM as a Web Service".
    As for SOAP and Web Services - they are total non-starters for the types of things the Croquet team are doing. These technologies are based on leveraging existing incumbent technology. This makes them an easier sell, as you can claim that your customer can leverage his existing investment. SOAP and Web Services are not innovative - they are merely attempts at standardisation for commercial reasons.
    You ignore the marvelous success of Globus, academia's favored grid fabric.
  240. You cannot implement Croquet with JavaScript.
    I assume Croquet could be implemented in either Java or JavaScript. I don't consider this a big deal. I've seen grid containers written in Python or Perl. As for collaboration, some phrases to google for include "sophisticated shared browser using the JavaScript" and "Collaborative DOM as a Web Service".
    As for SOAP and Web Services - they are total non-starters for the types of things the Croquet team are doing. These technologies are based on leveraging existing incumbent technology. This makes them an easier sell, as you can claim that your customer can leverage his existing investment. SOAP and Web Services are not innovative - they are merely attempts at standardisation for commercial reasons.
    You ignore the marvelous success of Globus, academia's favored grid fabric.
    Hi Brian,

    Have you watched the video? Have you read any of the white papers or the 1978 PhD dissertation that Croquet is based on? Have you ever used Smalltalk in depth?

    If you had done any of these things you would realise right away why neither JavaScript nor SOAP cuts it for Croquet. Croquet is not another anything - it is unique in its capabilities, and it is definitely not a grid.

    Croquet is one of those rare beasts - novel, new, innovative. Similar to Seaside in this regard, but a whole lot further reaching in its vision and its potential impact. Take the time to do one of the activities I've listed above (the easiest is to watch the video) and you will see for yourself.

    The goal of Croquet is to replace the current internet web browser with something far more powerful. The nearest thing I know to the vision of Croquet is the movie "The Matrix".

    Paul.
  241. If you had done any of these things you would realise right away why neither JavaScript nor SOAP cuts it for Croquet. Croquet is not another anything - it is unique in its capabilities, and it is definitely not a grid. Croquet is one of those rare beasts - novel, new, innovative.
    That seems too wild to me. When it comes to interactive grids I'm not an outsider -- I'm a developer and published author. And I've seen modern teleimmersion and collaborative VR suites.

    You're the one making a positive assertion -- that Croquet is novel and beyond the capabilities of conventional grids, Java, JavaScript, and SOAP. You've had repeated opportunities to explain your claim, but never have. Why?
  242. If you had done any of these things you would realise right away why neither JavaScript nor SOAP cuts it for Croquet. Croquet is not another anything - it is unique in its capabilities, and it is definitely not a grid. Croquet is one of those rare beasts - novel, new, innovative.
    That seems too wild to me. When it comes to interactive grids I'm not an outsider -- I'm a developer and published author. And I've seen modern teleimmersion and collaborative VR suites. You're the one making a positive assertion -- that Croquet is novel and beyond the capabilities of conventional grids, Java, JavaScript, and SOAP. You've had repeated opportunities to explain your claim, but never have. Why?
    Hi Brian,

    I'm not into sound bites - I'll leave that to you.

    Like I said before, read the white papers and the dissertation and decide for yourself.

    Paul.
  243. Croquet - Vision and Innovaton[ Go to top ]

    But at least there is a consistent vision in play; my question is what has happened to Sun's Vision for Java? Paul.

    You should read more of James Gosling's blog and articles. Much of Sun's vision seems concentrated on T-Shirt hurling.
  244. Croquet - Vision and Innovaton[ Go to top ]

    But at least there is a consistent vision in play; my question is what has happened to Sun's Vision for Java? Paul.
    You should read more of James Gosling's blog and articles. Much of Sun's vision seems concentrated on T-Shirt hurling.
    Hi Steve,

    I guessed as much. Will do, thanks for the pointer.

    Paul.
  245. Ant, XSL, and o:XML are all languages with rich semantics that conform to XML's (meta)grammar.
    You seem to be asserting that a metagrammar, which is so handy, is a handicap when it comes to internal DSLs. I know you've gotten this backwards, especially since the most widely used internal DSL is JavaScript embedded in HTML. Markup has the only community that's doing anything systemic and/or theoretic with compound source files.
    Typing XML is not a pleasure.
    Who types anymore?! In Eclipse with completion, quickfixes, the clipboard, and refactoring I'd be surprised if half of the characters in my source were keystroked. You're a fan of some dynamic/scripting language, so I guess you're happy with vi/emacs. But I think development's future is about automation, at least as much as Eclipse, likely much more.
  246. You seem to be asserting that a metagrammar, which is so handy, is a handicap when it comes to internal DSLs.

    No, that's not what I'm saying. I'm saying XML's metagrammar is an advantage because it allows multiple languages to be mixed together without a priori knowledge of the languages.

    I just don't like typing XML.
    I know you've gotten this backwards, especially since the most widely used internal DSL is JavaScript embedded in HTML.

    Please explain to me how JavaScript is a DSL. I'm not contesting that it's embedded in HTML, I'm contesting that it is somehow specially suited to client-side browser scripting for any reason besides the fact that browsers support it.
    Who types anymore?! In Eclipse with completion, quickfixes, the clipboard, and refactoring I'd be surprised if half of the characters in my source were keystroked.

    I can type faster in a concise language than I can with code completion in an overly verbose one.

    I also hate looking at all those ugly angle brackets.
    You're a fan of some dynamic/scripting language, so I guess you're happy with vi/emacs.

    No, I'm not. I actually find both really annoying.
    But I think development's future is about automation, at least as much as Eclipse, likely much more.

    Greater productivity gains can be achieved through better abstraction without the overhead of automation.
  247. Please explain to me how JavaScript is a DSL. I'm not contesting that it's embedded in HTML, I'm contesting that it is somehow specially suited to client-side browser scripting for any reason besides the fact that browsers support it.
    A DSL is a specialized language invented for a narrow problem. You coyly act as though you've no idea what domain JavaScript was invented for, which happens to be JavaScript's specialty then and today. Your coyness allows you to pretend that JavaScript lacks a specific domain, which in turn conveniently allows you to dismiss angle bracket documents being the dominant host of internal DSL.
    Greater productivity gains can be achieved through better abstraction without the overhead of automation.
    All computational abstractions need automation in practice. To deny this is to misunderstand either abstraction or automation or both.
  248. A DSL is a specialized language invented for a narrow problem. You coyly act as though you've no idea what domain JavaScript was invented for, which happens to be JavaScript's specialty then and today. Your coyness allows you to pretend that JavaScript lacks a specific domain, which in turn conveniently allows you to dismiss angle bracket documents being the dominant host of internal DSL.

    Ok, maybe instead of saying JavaScript isn't a DSL, because obviously somebody designed it to be a DSL, I should say it absolutely fails in its design.

    When I think of a DSL, I think of languages like Maple and Mathematica, Prolog, Awk, SQL, the various flavors of shell scripts, XSLT, etc. These all take on a form that is very useful for the tasks for which they are designed, but are clunky at best outside of their domains.

    Languages like C++, Python, Ruby, and Smalltalk are designed to allow programmers to create internal DSLs, which are really just really well crafted libraries and frameworks.

    Java, on the other hand, takes the position that a library-is-a-library, and it shouldn't be allowed to look like the core language. But we should give credit where credit is due - there are countless successful Java libraries and frameworks out there. The constructs that make a language suitable for creating internal DSLs are typically either syntactic sugar or beyond most programmers (or require more thought than is economically viable).
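
    To make the distinction concrete, here is a hedged sketch (all names invented) of the "really well crafted library" style in Java itself - method chaining pushed toward an internal DSL, which Java can approximate even without operator overloading:

        // A tiny fluent "order entry" vocabulary: each method returns the
        // builder, so client code reads almost like a sentence in the domain.
        public class Order {
            private final StringBuilder text = new StringBuilder("order:");

            public static Order order() {
                return new Order();
            }

            public Order of(int qty, String item) {
                text.append(' ').append(qty).append("x ").append(item);
                return this;
            }

            public Order shipTo(String address) {
                text.append(" -> ").append(address);
                return this;
            }

            public String describe() {
                return text.toString();
            }

            public static void main(String[] args) {
                // Reads almost like the domain language itself:
                System.out.println(
                    Order.order().of(2, "widgets").shipTo("10 High Street").describe());
            }
        }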
  249. A DSL is a specialized language invented for a narrow problem. You coyly act as though you've no idea what domain JavaScript was invented for, which happens to be JavaScript's specialty then and today. Your coyness allows you to pretend that JavaScript lacks a specific domain, which in turn conveniently allows you to dismiss angle bracket documents being the dominant host of internal DSL.
    Ok, maybe instead of saying JavaScript isn't a DSL, because obviously somebody designed it to be a DSL, I should say it absolutely fails in its design. When I think of a DSL, I think of languages like Maple and Mathematica, Prolog, Awk, SQL, the various flavors of shell scripts, XSLT, etc. These all take on a form that is very useful for the tasks for which they are designed, but are clunky at best outside of their domains. Languages like C++, Python, Ruby, and Smalltalk are designed to allow programmers to create internal DSLs, which are really just really well crafted libraries and frameworks. Java, on the other hand, takes the position that a library-is-a-library, and it shouldn't be allowed to look like the core language. But we should give credit where credit is due - there are countless successful Java libraries and frameworks out there. The constructs that make a language suitable for creating internal DSLs are typically either syntactic sugar or beyond most programmers (or require more thought than is economically viable).
    Hi Erik,

    You've made the mistake of taking notice of Brian again. JavaScript is a dumbed-down version of Self, a prototype-based language Sun Research was working on, derived from Smalltalk.

    Self has nothing to do with DSLs. Self is a general-purpose language. The reason why it ended up as JavaScript is because Sun/Netscape needed a dynamic language for their "browser platform", quick! (those marketing boys again).

    How are you getting on with your Python metaprogramming? Have you had a chance to read the Smalltalk metaprogramming article I posted?

    Leave Brian to his own devices. For him ignorance is bliss...As for me I'd love to learn something new, especially about Python and metaprogramming.

    Paul.
  250. Ok, maybe instead of saying JavaScript isn't a DSL, because obviously somebody designed it to be a DSL, I should say it absolutely fails in its design.
    "Absolutely fails"? There's more JavaScript code travelling the Internet backbone than code built from any other programming language. If that's failure, then what's your success criterion?!
    Languages like C++, Python, Ruby, and Smalltalk are designed to allow programmers to create internal DSLs...
    Horsefeathers! Ruby might have been designed for interpreting internal DSLs, I don't know for sure. But the other languages you list were surely not designed to interpret internal DSLs. Ruby qualifies as an internal DSL processor since it publicizes hooks into the lexeme stream. XML also does this. C++ and Python (and maybe Smalltalk) don't expose the token stream, so novel grammars can't easily be embedded. If internal DSLs are a design goal, then these languages you mention are flops.
  251. Ok, maybe instead of saying JavaScript isn't a DSL, because obviously somebody designed it to be a DSL, I should say it absolutely fails in its design.
    "Absolutely fails"? There's more JavaScript code travelling the Internet backbone than code built from any other programming language. If that's failure, then what's your success criterion?!
    Languages like C++, Python, Ruby, and Smalltalk are designed to allow programmers to create internal DSLs...
    Horsefeathers! Ruby might have been designed for interpreting internal DSLs, I don't know for sure. But the other languages you list were surely not designed to interpret internal DSLs. Ruby qualifies as an internal DSL processor since it publicizes hooks into the lexeme stream. XML also does this. C++ and Python (and maybe Smalltalk) don't expose the token stream, so novel grammars can't easily be embedded. If internal DSLs are a design goal, then these languages you mention are flops.
    Brian,

    Did you read my last post? Whose definition of DSL are you using?

    I don't know which is worse - when you were gassing on about MDA, or now. Please, please, please stop posting this ill-informed nonsense.

    You are adding very little to this forum. And like I said before, it would be better if you just went away (lexemes, for heaven's sake - whatever next? :^)).

    Paul.
  252. Whose definition of DSL are you using?
    "DSLs are languages (or most often, declared syntaxes or grammars) with extremely specific goals in design and implementation."
    http://en.wikipedia.org/wiki/Domain-specific_programming_language

    This definition mentions syntax and grammar, which I've focused on. Whereas you've distractingly dwelt on libraries, which begs the question of which obscure (perhaps uncitable) definition you are using.
  253. Whose definition of DSL are you using?
    "DSLs are languages (or most often, declared syntaxes or grammars) with extremely specific goals in design and implementation."http://en.wikipedia.org/wiki/Domain-specific_programming_languageThis definition mentions syntax and grammar, which I've focused on. Whereas you've disctractingly dwelt on libraries, which begs which obscure (perhaps uncitable) definition you use.
    OK Brian,

    Let me be more specific. What definition of internal DSL are you using?

    The only definition I know of is Martin Fowler's, given he made up the term. The type of DSL you are talking about, and which is described on Wikipedia, Fowler calls an external DSL.
    The most common unix-style approach is to define a language syntax and then either use code generation to a general purpose language, or write an interpreter for the DSL. Unix has many tools that make this easier. I use the term External DSL to refer to this kind of DSL. XML configuration files are another common form of External DSL. ....
    The lisp and smalltalk communities also have a strong tradition of DSLs, but they tend to do it another way. Rather than define a new language, they morph the general purpose language into the DSL. (Paul Graham describes this well in Programming Bottom-Up.) This Internal DSL (also referred to as an embedded DSL) uses the constructs of the programming language itself to define the DSL. This is a common way to work in any language, I've always thought of defining functions in such a way to provide a form of DSL for my problem. But lispers and smalltalkers often take this much further.


    Both Erik and I have pointed this out several times. What is your point? Are you denying the Lisp and Smalltalk tradition Fowler describes? Or are you saying that you do not value it?

    From what you've written you clearly do not understand it.

    Paul.
  254. EDSL, DSEL, Internal DSLs[ Go to top ]

    The only definition I know of is Martin Fowler's, given he made up the term. ... This Internal DSL (also referred to as an embedded DSL)...
    In these two sentences you contradict yourself. You nearly imply that Fowler has a monopoly on the definition as its inventor. But Fowler freely admits that he stole an existing notion. Without adding anything of value to its definition, Fowler took the long-existing concept of "embedded DSL" (EDSL) and renamed it as "internal DSL". And others had also named this a "domain-specific embedded language" (DSEL).

    Googling for the phrase "embedded domain specific language" gives hundreds of hits. One of them was published in the first Conference of Domain-Specific Languages, nearly ten years before Fowler's content.

    Erik claimed that Python, C++, and Smalltalk were "designed to allow programmers to create internal DSLs". This isn't what historians are saying. Rather, "Tcl is of course famously designed to be a host for EDSLs... Lisp is another big EDSL host - in fact this is part of the culture of Lisp...".

    Whereas of Python (that Erik hailed) it has been asked "What is a DSL, and is it possible in Python?" The same web page goes on to flame Python's DSL readiness due to several severe technical hurdles, and concludes: "Zope is a very clear example that its quite possible to build an internal DSL in Python. Whether they built it in a clear, easy to use way is up for someone else to debate."

    Erik's claim of "Python ... designed ... to create internal DSLs" seems far-fetched. Why does he cite Python, C++, and Smalltalk, when the experts are citing Lisp, TCL, and Ruby?

    That said, I reluctantly accept the definition of internal DSL that Paul gave. This blurb swung me: "Even particular idiomatic coding styles are essentially DSLs. Martin Fowler calls these 'internal DSLs'." This irks me since it doesn't imply a novel grammar, which I intuitively felt was essential to any claim of being a language.

    On this discomfort with the too lax definition of internal DSL, I'm not alone:
    During the conference I noticed that “domain-specific” became a bit like a mantra: participants started to see domain-specific languages everywhere, even when just adding their own code template or similar to a general purpose language (like embedded SQL). Martin Fowler showed also some of these “internal” DSL languages. After he explained such a sample language, its domain-specific structure and proposed abstractions became easy to understand. But were all these language examples truly domain-specific languages? This question was asked by many in the conference.
  255. Googled Acronyms and made-up Baloney[ Go to top ]

    Hi Brian,

    As Erik has said:
    You need to focus on the objective - bringing programming closer to the domain
    All this nonsense about "novel grammar" - I still don't see your point. If you showed just a little humility, I wouldn't mind educating you.

    But as it stands, I just find your crass ignorance and arrogant tone infuriating. As I've said several times before, you are adding little to this discussion, so do the decent thing...


    Paul.
  256. All this nonsense about "novel grammar" - I still don't see your point.
    This isn't just my point; it's shared by others. I cited someone else's complaint of a conference that (in his words) "participants started to see domain-specific languages everywhere, even when just adding their own code template or similar to a general purpose language". You or Erik boldly asserted that API and internal DSL are synonymous.

    If a definition is so broad that it can include almost all code ever written, then two problems emerge. First, the definition utterly lacks value. It raises nothing new. APIs are a conquered notion. Second, since the definition regards conquered notions such as idioms and APIs, it has no place in a discussion on the future of software development and what lies "beyond Java".

    You might not realize it, but we're in this thread to chat about innovation. So far Paul and Erik's answer to this call for innovation has been blather about the supposedly wondrous future of some stale languages: C++, Python, LisP, and SmallTalk. I'm thankful for the invisible hand and that it isn't theirs to misguide.
  257. You or Erik boldly asserted that API and internal DSL are synonymous.

    No. I asserted that an internal DSL is a library, not that a library is a DSL.
    If a definition is so broad that it can include almost all code ever written, then two problems emerge.

    Very little code that I've seen meets my definition of an internal DSL.
    APIs are a conquered notion.

    If they are so thoroughly conquered why are so many of them so poorly designed?
    So far Paul and Erik's answer to this call for innovation has been blather about the supposedly wondrous future of some stale languages: C++, Python, LisP, and SmallTalk.

    I don't think C++ has much of a future beyond where it already is - one of the most widely used languages on the planet, but slowly losing ground. That being said, I think there's a lot to be learned from C++ in terms of language design, both good and bad.

    Python is great but I think it has too many flaws to ever reach the critical mass of Java or C++. I do hope Python's syntax (yes, I mean semantically significant whitespace, among other things) makes it into the next great language.

    There will never be a large enough percent of the population who grok LISP for it to take off. That being said, I think it's powerful enough that it will be around forever.

    I'm pretty sure what I want in a language isn't out there. My opinion is that the purpose of a language should be to enable the programmer to effectively communicate with the computer. Keep in mind I'm saying "communicate with the computer," not "tell the computer what to do." Dynamic languages allow the programmer to be less explicit with the computer (so they can express less well-formed thoughts), and makes the computer give them faster feedback - by shrinking the compilation and deployment times, and by allowing direct interaction (try the Python console).

    I think the advantage of dynamic typing really comes at the early stages of design/coding, when you haven't really worked out your classes and interfaces yet. Static typing forces you to at least partially work this out, otherwise the compiler will complain. So the programmer is forced to express something before he really understands it.

    However, at some point before the code is "production ready," the programmer (and the rest of his team) had better understand the code and why it works. In this case, I think static typing is a good thing, because once a concept has been firmed up, the compiler can validate (to an extent) that it's being correctly applied.

    The problem is the programmer has to go to a lot of work to express type information to the compiler, but doesn't get much out of it (some performance, some rather trivial checking). The programmer should only tell the computer something if the computer is going to do something useful with it. Useful certainly includes identifying mistakes at compile time, not just runtime. In other words, I'm a huge believer in static analysis. But it has to be effective and useful.
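
    A small, contrived illustration of that trade-off in Java itself (my example, not anyone else's): the same mistake surfaces only at runtime with an untyped collection, but at compile time once the type is expressed.

        import java.util.ArrayList;
        import java.util.List;

        public class TypingDemo {
            public static void main(String[] args) {
                // Exploratory style: a raw collection accepts anything, and the
                // mistake only surfaces later, at runtime, as a ClassCastException.
                List loose = new ArrayList();
                loose.add("42"); // oops: a String where an Integer was intended
                // Integer bad = (Integer) loose.get(0); // compiles, fails at runtime

                // Firmed-up style: once the concept is expressed as a type, the
                // compiler rejects the same mistake before the program ever runs.
                List<Integer> firm = new ArrayList<Integer>();
                // firm.add("42"); // compile-time error: incompatible types
                firm.add(Integer.valueOf(42));
                Integer n = firm.get(0);
                System.out.println(n);
            }
        }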
  258. My opinion is that the purpose of a language should be to enable the programmer to effectively communicate with the computer. Keep in mind I'm saying "communicate with the computer," not "tell the computer what to do." ... I think the advantage of dynamic typing really comes at the early stages of design/coding...
    That's one of the better arguments for MDA. UML isn't burdened with the verbosity typical of statically typed languages. And as I've explained much earlier in this thread, MDA is dreamy at prototyping:
    The analyst never loses handcrafted design or implementation logic; these were generated. This gives the analyst more freedom to explore the subject matter. He can speculatively change his model and regenerate quickly. Within the knowledge domain, he can take greater risks than a traditional programmer, since the odds and cost of a mistake are less.
  259. Also from Wikipedia[ Go to top ]

    A DSL is somewhere between a tiny programming language and a scripting language, and is often used in a way analogous to a programming library. The boundaries between these concepts are quite blurry, much like the boundary between scripting languages and general-purpose languages.

    I like this part:
    To summarize, an analogy might be useful: a Very Little Language is like a knife, which can be used in thousands of different ways, from cutting food to cutting down trees. A DSL is like an electric drill: it is a powerful tool with a wide variety of uses, but a specific context, namely, putting holes in things (although it might also be used to mix paint or remove screws). A GPL is a complete workbench, with a variety of tools intended for performing a variety of tasks. DSLs should be used by programmers who, looking at their current workbench, realize they need a better drill, and find that a specific DSL provides exactly that.

    An internal DSL is like buying a better drill and adding it to your workbench, thereby making your workbench that much more effective. It's like buying the ~$100 cordless drill. An external DSL is like the $1000 hammer-drill you need for drilling concrete. 99% of the time you don't need it - you want something cheaper and lighter. Unless, of course, you drill concrete every day.

    So most of the time we just use libraries and frameworks with Java - except for accessing the database - in which case we use SQL/HQL/your favorite ORM dialect here. Why? Because we access the database every day, and it's a critical part of what we do. So we have a DSL for it.
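
    For instance, a sketch of what that looks like in practice with Hibernate (Post is a hypothetical mapped entity here, and the session setup is omitted) - the HQL string is the embedded DSL, while the Java around it is just a library:

        import java.util.List;
        import org.hibernate.Query;
        import org.hibernate.Session;

        public class PostDao {
            private final Session session;

            public PostDao(Session session) {
                this.session = session;
            }

            // The interesting part is the string: it is written in HQL, the
            // query DSL, while the surrounding plumbing stays plain Java.
            public List findByAuthor(String author) {
                Query q = session.createQuery(
                    "from Post p where p.author = :author order by p.created desc");
                q.setParameter("author", author);
                return q.list();
            }
        }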

    You need to focus on the objective - bringing programming closer to the domain - rather than the manifestation - DSLs with unique grammars.
  260. Yes, JavaScript absolutely fails. So does x86. Windows does not - it was never designed for networked computers. So, of course, it's insecure and has all sorts of problems when connected to a giant network.

    Whether or not you consider JavaScript a success depends on your metric. When I think about JavaScript as a DSL, after suppressing my gag reflex, I ask myself: If Java could be embedded in a similar fashion to JavaScript, would it as a language be better? Could I imagine a language that would be better than plain-old Java for client-side web scripting? Is there anything about JavaScript that makes it useful other than the fact that every web browser supports it?

    Yes to the first two questions, no to the third. But, of course, being that JavaScript is what everything supports, the fact that it's ill-suited to just about any purpose doesn't matter, because nothing else is even playing the game.
  261. The next big thing...[ Go to top ]

    Hi Guys,

    Been away, but the time off gave me a chance to finally read all of Bruce Tate's "Beyond Java" book. In it Bruce rightly points out that Sun saw the Internet as an opportunity to wrong-foot Microsoft by challenging the Windows platform with the Web Browser/JVM platform.

    Tactically Sun has been successful, but strategically Sun has failed, IMO, in creating a ubiquitous web application platform. Where are all those Java thin-client devices we were all supposed to be using by now? Why has Sun's share price plummeted over the last 10 years?

    The reasons for this failure IMO are numerous, but in short they can be summed up by short-termism and failing technology. Marketing hype on its own will always fail in the longer term. IMO C++ has failed (and partially killed off) OO development, and Java has failed web browser development (what happened to all those applets?). Server-based HTTP web applications as we know them today are a horrible kludge. It has been over 8 years since Sun came out with the Servlet API and we are still seeing new "web frameworks" emerging. What does that tell you?

    So where does the future lie? Well, I think the next big thing will still be an internet application platform, but it will be a platform that truly moves things well beyond HTTP and HTML.

    As for internal DSLs, the idea is not new and it wasn't invented by Martin Fowler. Martin Fowler has just given us a vocabulary that we can use to discuss ideas that have been around for a long while. It is not by chance that Smalltalk allows for operator overloading. LISP takes the idea even further and breaks programs down into "atoms"; an atom can be any character you like (I think :^)), so you can overload everything.

    In the design of the 3D graphics DSL in Croquet, the Croquet team describe their "API" as a "semi-retained architecture". The idea is that they layer DSLs one on top of the other, with ever-increasing abstraction. So a lay programmer can use a high-level DSL, and an expert can drop down to a lower-level DSL when needed. This is how they described the idea:
    The philosophy behind Croquet’s TeaPot rendering engine is based on allowing the programmer complete access and control of the underlying graphics library, in this case OpenGL, while at the same time, providing a rich framework within which to embed these extensions with a minimal level of effort. This allows the naïve graphics programmer and 3D artist a way to easily create interesting graphic artifacts with minimal effort, yet allows the expert the ability to add fundamental extensions to the system without the need to modify the underlying architecture.

    The weakness with this approach, as Martin Fowler points out, is that the lay programmer is also exposed to all the underlying DSLs, which he could find confusing (the loaded-gun scenario). Languages like Java try to remove the "loaded gun" from programmers, assuming that they will only use it to shoot themselves in the foot. As a consequence, the scope for innovation within the language itself is greatly reduced. Tying the ideas of an image and a language workbench together would allow for Croquet-style DSLs and "lay programmer safety" (see the sketch after the link below). In fact the Croquet team are doing something similar themselves by creating an end-user scripting language:

    http://croquet.doit.wisc.edu/wiki/tiki-index.php?page=ScriptingLanguage
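
    A crude sketch of that layering in Java (all names invented; Croquet's real engine is Squeak over OpenGL, not this): the lay programmer talks only to the high-level layer, while the expert can still drop down to the thin wrapper underneath.

        // Low-level layer: a thin, expert-facing wrapper over an OpenGL-style
        // library. Calls here would map one-to-one onto the graphics API.
        class LowLevelGfx {
            static void begin(String primitive) { /* bind to the native library */ }
            static void vertex(double x, double y, double z) { /* ... */ }
            static void end() { /* ... */ }
        }

        // High-level layer: the lay programmer's vocabulary, built entirely
        // out of the layer below. Nothing stops an expert calling LowLevelGfx
        // directly to extend the system without touching this layer.
        class Scene {
            void addTriangle(double size) {
                LowLevelGfx.begin("TRIANGLES");
                LowLevelGfx.vertex(0, size, 0);
                LowLevelGfx.vertex(-size, 0, 0);
                LowLevelGfx.vertex(size, 0, 0);
                LowLevelGfx.end();
            }
        }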


    So I see Croquet as pointing the way towards the next big killer app. I think it will be platform based as Sun intended. It will be ubiquitous and it will run over the internet, just like the current web. It will retain the common web browser 2D UI for backward compatibility, but in addition it will provide a new much richer 3D UI. It will be fully dynamic, exploiting late binding, it will overcome all the versioning issues associated with applets, properly facilitating mobile code, and it will be fully OO.

    I really do think that Croquet or something very similar will be the next big thing. In the same way that C++ lost its crown because it couldn't be used to build frameworks easily, so will Java lose its crown because it cannot be used to build 3D, dynamic, internet-based collaborative applications.

    Paul.
  262. The next big thing...[ Go to top ]

    Why has Sun's share price plummeted over the last 10 years? The reasons for this failure IMO are numerous, but in short they can be summed up by short-termism and failing technology.

    I think it is the opposite - Sun have aimed for the long term, and stuck with its own plans for operating systems and hardware designs.
    Marketing hype on its own will always fail in the longer term. IMO C++ has failed (and partially killed off) OO development, and Java has failed web browser development (what happened to all those applets?).

    I think if Microsoft had not held back the progress of Java VMs on most client systems, the situation with applets might be very different today.
    In the same way that C++ lost its crown because it couldn't be used to build frameworks easily, so will Java lose its crown because it cannot be used to build 3D, dynamic, internet-based collaborative applications. Paul.

    Of course it can. Just take a look at the Java 3D API In Action examples page:

    http://java.sun.com/products/java-media/3D/in_action/application.html

    You will see many examples of 3D internet based collaborative applications, in areas such as games, geography and medicine.
  263. The next big thing...[ Go to top ]

    Why has Sun's share price plummeted over the last 10 years? The reasons for this failure IMO are numerous, but in short they can be summed up by short-termism and failing technology.
    I think it is the opposite - Sun have aimed for the long term, and stuck with its own plans for operating systems and hardware designs.

    Are you sure? I guess that's why they are moving towards x86 processors and Linux.
    Marketing hype on its own will always fail in the longer term. IMO C++ has failed (and partially killed off) OO development, and Java has failed web browser development (what happened to all those applets?).
    I think if Microsoft had not held back the progress of Java VMs on most client systems, the situation with applets might be very different today.
    I don't think we can blame Microsoft totally for this one. Even if Sun and Netscape had got their act together and got consistent and stable VMs out there, the Java approach would still have led to problems. First of all, the applet approach, as I remember, did not consider versioning. This is a lesson that the Croquet folks have learnt at Java's expense. Also, Alan Kay makes a very good point that any VM specification based on a written "paper" spec will always lead to incompatible implementations. In fact Alan Kay wanted to use Java for Croquet, but it was this single point that convinced him that he would have to come up with his own VM, just to ensure that it was bit-compatible on all platforms.
    In the same way that C++ lost its crown because it couldn't be used to build frameworks easily, so will Java lose its crown because it cannot be used to build 3D, dynamic, internet-based collaborative applications. Paul.
    Of course it can. Just take a look at the Java 3D API In Action examples page: http://java.sun.com/products/java-media/3D/in_action/application.html You will see many examples of 3D internet based collaborative applications, in areas such as games, geography and medicine.

    Hi Steve,

    You are an intelligent guy. What I am talking about is a distributed shared 3D virtual space, with 3D objects that are replicated across several clients. Clients communicate events (messages) within the shared space through broadcasts. This can work peer-to-peer, or a central "world base server" can be used to store common state that is then replicated to joined clients. Croquet allows people to work and collaborate together over the internet in a shared 3D virtual world. Take a look at the Croquet demo video, and you will see what I mean:

    http://video.google.com/videopreviewbig?q=Squeak&time=0&page=1&docid=-3163738949450782327&urlcreated=1125158106&chan=Uploaded&prog=Croquet%3A+A+Collaboration+Architecture&date=Fri+Aug+5+2005+at+6%3A45+PM+PDT

    This link seems to be down at the minute, so if you don't mind reading a bit then scan the material at the main Croquet site:

    http://www.opencroquet.org/index.html

    Paul.
  264. The next big thing...[ Go to top ]

    Are you sure? I guess that's why they are moving towards x86 processors and Linux.

    They have been supporting those to some degree for a very long time. I remember SunOS on a 386...
    Also Alan Kay makes a very good point that any VM specification based on a written "paper" spec will always lead to incompatible implementations.

    Mentioning Alan Kay is always a good way to get me to listen...

    Hi Steve, You are an intelligent guy.

    Thanks :) May I return the compliment.
    What I am talking about is a distributed shared 3D virtual space, with 3D objects that are replicated across several clients. Clients communicate events (messages) within the shared space through broadcasts. This can work peer-to-peer or a central "world base server" can be used to store common state that is then replicated to joined clients. Croquet allows people to work and collaborate together over the internet in a shared 3D virtual world.

    Like the University of Washington's Virtual Playground?

    Or University College London's ActiveWorlds?

    These are projects showing how this could be done in Java from years ago.

    I'll certainly take a look at Croquet though - anything Alan Kay does is worth looking at.
  265. The next big thing...[ Go to top ]

    What I am talking about is a distributed shared 3D virtual space, with 3D objects that are replicated across several clients. Clients communicate events (messages) within the shared space through broadcasts. This can work peer-to-peer or a central "world base server" can be used to store common state that is then replicated to joined clients. Croquet allows people to work and collaborate together over the internet in a shared 3D virtual world.
    Like the University of Washington's Virtual Playground? Or University College London's ActiveWorlds? These are projects showing how this could be done in Java from years ago. I'll certainly take a look at Croquet though - anything Alan Kay does is worth looking at.

    Yes - sometimes something is possible using a given technology, but would be much easier using a technology that is more appropriate. The examples that come to mind are the OpenDoc and CommonPoint component frameworks (remember them?). In Java, building such frameworks is significantly easier than in C++, hence the success of Eclipse and its plugins. The missing ingredient in C++ is late binding; Java interfaces help significantly here, but still do not go all the way.

    My guess is that a similar difference in ease exists between "collaborative 3D component" internet applications in Smalltalk/Squeak (Croquet) versus Java. One of the Java 3D sites you listed quoted a 5GByte download just to run the demo!

    I haven't looked at any 3D Java stuff before, so I'd be interested in hearing how you think Croquet compares.

    Cheers,

    Paul.
  266. In-language DSL[ Go to top ]

    It uses OOP w/ operator overloading... In other words, it provides a symbolic math DSL...
    That ain't a DSL. A DSL has a novel grammar. Your math example doesn't even impose new semantics upon C++. Your understanding of DSL theory is obviously half baked.

    I've used Maple a fair amount. I would consider Maple a domain specific language for mathematics. The syntax GiNaC uses for building and evaluating expressions, while purely C++, doesn't look materially different.

    So I guess I don't care if it fits your definition of a DSL or not, because it certainly brings the language much closer to the problem domain in an intuitive way.
  267. In-language DSL[ Go to top ]

    So I guess I don't care if it fits your definition of a DSL or not...
    Dude, look at all of the example DSLs given by Fowler or his IntelliJ pals. Each imposes a novel grammar that expects a special parser (or in the case of XML, a custom schema). Then you give your spurious math example without a novel grammar.
  268. In-language DSL[ Go to top ]

    So I guess I don't care if it fits your definition of a DSL or not...
    Dude, look at all of the example DSLs given by Fowler or his IntelliJ pals. Each imposes a novel grammar that expects a special parser (or in the case of XML, a custom schema). Then you give your spurious math example without a novel grammar.

    Brian,

    This is like spoon-feeding a baby. You do not make it clear which "novel grammar" examples you are referring to. In his paper Fowler only provides examples of external DSLs, from what I remember. What both I and Erik are talking about are internal DSLs.

    Now you are correct in thinking that external DSLs require a parser and in so doing can contain novel grammar. This is clearly not the case for internal DSLs.

    Brian, if you have nothing positive to add, then please just go away.

    Paul.
  269. In-language DSL[ Go to top ]

    You do not make it clear which "novel grammar" examples you are referring to.
    Here's the link I gave to Fowler's example internal DSL for Java. You can clearly see he's invented a novel grammar, with a colon operator that the parser must recognize in a novel way:
    http://www.martinfowler.com/articles/mpsAgree.html

    You made the claim that operator overloading in the internal DSL presupposes operator overloading in the host language. I showed in Fowler's own example internal DSL that your claim is indefensible. Then you told me to "go away", which to me seems a typical naked Amerikan emperor reaction. Why can't you directly address my point about the colon operator in Fowler's example internal DSL, when it was you who insisted on the importance of Fowler's research?
  270. I Hate JavaBeans[ Go to top ]

    I agree about JavaBeans.
    We use collections... w/ Groovy.
    No way we could have done Groovy w/ J2EE. Groovy can use Java jars... unlike Ruby.

    .V

    check out http://roomity.com for latest tech stories.
  271. Thanks for your thoughts.[ Go to top ]

    I'll blog a little more about my experiences today, but I thought I should talk about them some here first, because this community has meant so much to me. I should probably point out what I have to lose.

    - All of my clients are Java clients.

    - I've got 3 JavaOne best sellers in 4 years. All of my best books are about Java.

    - I've worked to establish a brand in the Java business. Indeed, my company name is based on the Java brand.

    Look...no technical book is going to make an author a lot of money. Generally, for a book like this, I'll get less than two bucks a book. A very good Java book sells around 10,000 copies, and if you put much effort into it at all, you can see that you'd make more money flipping burgers. Writing is about the love of the craft. So you should be asking yourself, why would a Java evangelist cut his own throat like this? And the answer is this: for my readers, I think it's the right thing to do. For at least one class of applications, web-based apps on a relational database where you control your own schema, you'd be crazy not to consider Rails. It's just too productive. I didn't understand it either until I used the framework in anger. What's different?

    - Rapid turnaround time. Save and reload. That's it. Very compelling.

    - Starting point. You start with basically a working CRUD app per table. Windows to do list, show, edit, delete, new. Sure, you'll rewrite some, but you can put something in front of your customer instantly. That's compelling. To get all of this, you can either do a one-time code generation pass (it's not that much code in Rails), or you can do a scaffold :person command (see the sketch at the end of this post).

    - Reduced configuration. For us it was a 10-1 reduction. I think that's fairly typical. More for some extreme Struts apps, less if you don't count annotations (but I think we're often making a big mistake with annotations).

    - Better mapping strategy for the Rails problem set. Here's an Active Record class that maps a Person class to a table called people, and belongs to a department:

    class Person < ActiveRecord::Base
      belongs_to :department
    end

    End of story. You want syntax? That's syntax. You get a property for every column in the database, custom finders like find_by_last_name, and other goodies to manage the belongs_to relationship. No repetition of column names. No code generation.
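
    For illustration, a minimal usage sketch of what that one declaration buys you - assuming a people table with a last_name column and a department_id foreign key, plus a corresponding Department model (the names here are hypothetical, not taken from the example above):

    # Hypothetical usage sketch - names are assumptions, not Rails source.
    person = Person.find_by_last_name("Tate")   # dynamic finder, never declared anywhere
    puts person.department.name                 # reader generated by belongs_to
    person.department = Department.find(1)      # writer is generated as well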

    - Much better AJAX, and cleaner view technology.
    As to quick and dirty, I used Java because it was clean, although slow. I didn't use PHP or Perl because I think they are quick and dirty. I think Ruby on Rails is quick and clean.

    But it's not for every Java project. I think that several areas are basically safe for Java: hard-core ORM, heavy threading, two-phase commit. Rails is for one well-defined niche, but it's probably the most important one, and that niche is rapidly expanding. More comments in my blog today...stay tuned.
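
    As a footnote to the "starting point" item above, here is a minimal sketch of what the run-time scaffold looks like in Rails 1.x-era code (the controller and model names are hypothetical):

    class PeopleController < ApplicationController
      scaffold :person   # one line yields list/show/new/edit/destroy actions at run time
    end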
  272. Reduced configuration...[ Go to top ]

    I'm learning Ruby now, so I can't really say a whole lot about it. However, having tried out Rails, the reduced configuration is really nice.

    That said, it's only nice until you have to do something out of the ordinary. The Rails list and IRC channel are *full* of people asking how to do things a little different from the standard way, and the response is basically that you have to recreate some or all of that configuration that you initially lost. Doing portlet-style development is one example; before long, your mapping file doesn't look that different from a typical Spring configuration.

    Maturity of the APIs and the developers is a big ding against Rails. I don't want to get into a philosophical debate over DB constraints, but I think a lot of experienced developers here appreciate why DB constraints are valuable. More importantly, our ORM tools in the Java world can model simple constraints like foreign key refs with no problem. In the Ruby world they're still arguing that your constraints should be enforced solely at the controller layer, and ActiveRecord, convenient as it is, has no knowledge of foreign key constraints.

    If you're writing an app which can be written in 4 days, it's likely that these sorts of concerns don't really pertain. I will posit that a significant percentage of the web apps which get written today in Java are good candidates for the Rails framework. Perhaps the percentage is as much as half. However, for those of us who live in a world where we don't control our schema, or where our schema has lived beyond 3 major revisions to the app, or where we need the rich, enterprise-class APIs that exist in the Java world, Rails is not really a compelling solution.

    I think everyone who writes Java apps should also know how to write Rails apps. You should be able to judge when something you're writing is trivial enough to throw away most of the complexity you get writing big Java applications. But the need to write big Java applications isn't going to go away, and there are still going to be plenty of Java jobs around. They'll probably pay better too, because if you need that complexity, you realize it costs, at all levels. ;)

    --james
  273. Reduced configuration...[ Go to top ]

    Great discussion, and thanks. Some comments:
    I'm learning Ruby now, so I can't really say a whole lot about it. However, having tried out Rails, the reduced configuration is really nice. That said, it's only nice until you have to do something out of the ordinary. The Rails list and IRC channel are *full* of people asking how to do things a little different from the standard way, and the response is basically that you have to recreate some or all of that configuration that you initially lost.
    Doing portlet-style development is one example; before long, your mapping file doesn't look that different from a typical Spring configuration. Maturity of the APIs and the developers is a big ding against Rails. I don't want to get into a philosophical debate over DB constraints, but I think a lot of experienced developers here appreciate why DB constraints are valuable. More importantly, our ORM tools in the Java world can model simple constraints like foreign key refs with no problem. In the Ruby world they're still arguing that your constraints should be enforced solely at the controller layer, and ActiveRecord, convenient as it is, has no knowledge of foreign key constraints.

    A good, and fair criticism. I've also made this point.
    If you're writing an app which can be written in 4 days, it's likely that these sorts of concerns don't really pertain. I will posit that a significant percentage of the web apps which get written today in Java are good candidates for the Rails framework. Perhaps the percentage is as much as half.

    This is the crux of my argument in Beyond Java. But I think this application set represents Java's original base. I actually argued slightly more than half. Most of us take a big fat relational database and front it with a web UI. Rails just solves this stuff much better than Java does. More if you add better ORM into Rails, and it's coming, with a pluggable Active Record. ORM is new in Ruby, but I think OG has a whole lot of things right. This is a matter of time to me.
    However, for those of us who live in a world where we don't control our schema, or where our schema has lived beyond 3 major revisions to the app, or where we need the rich, enterprise-class APIs that exist in the Java world, Rails is not really a compelling solution. I think everyone who writes Java apps should also know how to write Rails apps. You should be able to judge when something you're writing is trivial enough to throw away most of the complexity you get writing big Java applications. But the need to write big Java applications isn't going to go away, and there are still going to be plenty of Java jobs around. They'll probably pay better too, because if you need that complexity, you realize it costs, at all levels. ;)

    --james

    I absolutely agree.
  274. Reduced configuration...[ Go to top ]

    Most of us take a big fat relational database and front it with a web UI.

    "Big fat relational database?" If it's really that big and fat relational legacy database with cryptic field names and hundreds of fields I encounter sometimes, that's just exactly the case against CRUD-based techniques. If you need to put a web frontend onto such an ugly thing, you'd rather make a layer of DAOs above it, isolating the rest of your application from that ugliness.
  275. Most of whom?[ Go to top ]

    "...Most of us take a big fat relational database and front it with a web UI...."

    Speak for yourself, or for most of yourself. Most of US have to try to maintain or integrate packaged "enterprise" applications. The big fat relational databases that underlie these kinds of applications yield nothing useful when approached in a CRUD style. The only way you get any real work done is to layer on thick slices of denormalization and business logic.

    Perversely enough, there is one popular little number (hint: it has an "eep" sound in its name) that has actually tried to align its database with its web-based UI, putting fields in the core transactional tables that have no other purpose than to drive details of the runtime behavior of the UI. Let's see: I had to bolt a web front-end onto this thing in a perishing hurry, so I got a framework, and now I have a UI, but it biases me towards a CRUD style, so...let's tamper with the data model to make it more CRUDdable by my CRUDdy new UI! Yeah, that's the game....
  276. Reduced configuration...[ Go to top ]

    I'm learning Ruby now, so I can't really say a whole lot about it. However, having tried out Rails, the reduced configuration is really nice. That said, it's only nice until you have to do something out of the ordinary. The Rails list and IRC channel are *full* of people asking how to do things a little different from the standard way, and the response is basically that you have to recreate some or all of that configuration that you initially lost. Doing portlet-style development is one example; before long, your mapping file doesn't look that different from a typical Spring configuration.

    But you almost never have to replace *all* of the configuration. That's the point. By providing meaningful defaults, you save your users lots of work. Ruby in general is this way, not just Rails. I'd kill for named parameters and defaults in Java. So I can do something like this:

    class Item < ActiveRecord::Base
      belongs_to :person
    end

    and do nothing to use the expected person_id column for the expected case... or this:

    class Item < ActiveRecord::Base
      belongs_to :person, :foreign_key => "ssn"
    end


    This is about Ruby, not Active Record. You get meaningful defaults and convention over configuration where they work, and you can override the defaults where they don't. I think Java has much to learn here.
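
    A minimal sketch of taking that overriding one step further, assuming a hypothetical legacy table tbl_items with primary key item_code (Rails 1.x-era Active Record API):

    class Item < ActiveRecord::Base
      set_table_name "tbl_items"                  # override the table-name convention
      set_primary_key "item_code"                 # override the primary-key convention
      belongs_to :person, :foreign_key => "ssn"   # override the foreign-key convention
    end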
  277. BINGO![ Go to top ]

    You nailed RoR's weakness on the head: Wizard code.

    Why? It's basically a bunch of wizards. As we all know from code wizards, if you need to do something they don't do, you're stuck having to do it from the ground up again. But in RoR's case, since the entire community assumes that won't happen, there is very little documentation on how to do stuff from the ground up in Rails. Once that does happen, and less-standard applications are documented, the configuration will probably equal that of most mature Java frameworks.

    Ruby will always have advantages due to its superior syntactic sugar though.

    These factor-of-ten statements being thrown around are almost always snake oil. There's fundamental complexity in systems, folks, and when the rubber hits the pavement in ugly legacy enterprise environments, there is no magic Ruby bullet, just like there isn't a magic Indian outsourcing bullet, and there's no magical code-generation-layer bullet.
  278. BINGO![ Go to top ]

    You nailed RoR's weakness on the head: Wizard code.

    I have not used RoR, but I strongly suspect that the main reason it is more productive initially is because it is essentially a wizard/template i.e. a web framework generator.

    Is this a good idea? Sure why not. Is this something that is somehow unique to RoR and can only be done with the Ruby language? I fail to see how or why. Exactly what language feature exists in Ruby that makes this impossible in Java?

    Back when I used to do ActiveX programming, Microsoft had a whole template and wizard system for generating ActiveX objects. Click new project, select the ActiveX wizard, presto change-o: instant working ActiveX component. Talk about productivity :-) ActiveX programming can be a little tricky, and so wizards like this can seem amazingly productive to newbies. However, the code was very hard to understand, and doing something outside of the "anticipated" could turn out to be extremely difficult. Similar to GUI builders back in the day, I assume (I did not do GUI). Really impressive for quickly generating an application, but difficult to maintain and customize. From my anecdotal experience, the more expert a programmer is, the less likely they are to use these wizards/templates... why is that? Many companies had started off with this approach (the ActiveX templates to create OLE DB drivers in this case), and this got them 80% quickly, but then they became so stuck that they ended up coming to the company I worked for to buy a hand-coded toolkit (also just another template :-)

    So templates can get you very far very fast for niche solutions, but deviating outside of the "model" can result in disaster -- again from my anecdotal experience. Are things different this time around for RoR?

    Ian
  279. BINGO![ Go to top ]

    You nailed RoR's weakness on the head: Wizard code.
    ....
    Are things different this time around for RoR?

    Ian
    I would say yes. With RoR it is different because RoR is not so much a wizard as a DSL (Domain Specific Language).
    I am a big fan of DSLs and believe that DSLs will be the Next Big Thing once tools like JetBrains' MPS mature enough (the idea of DSLs has been around for decades, which IMO verifies its viability).

    In the meantime I would rather trust 'internal' DSL simulation by means of traditional languages, which benefit enormously from the help of mature tools and infrastructure.

    Neal Ford hosted a very good session on DSLs at the last NFJS; I highly recommend both.
  280. BINGO![ Go to top ]

    You nailed RoR's weakness on the head: Wizard code.
    ....&nbsp;Are things different this time around for RoR?Ian
    I would say yes. With RoR it is different because RoR is not so much a wizard as a DSL (Domain Specific Language).

    DSLs are an interesting idea; I was not aware that Ruby could be used to write DSLs. That is a pretty nifty language feature -- I guess I will finally have to look into how it does this. Been putting off this whole investigating-RoR thing ;-)
  281. DSL, BINGO![ Go to top ]

    DSLs are an interesting idea; I was not aware that Ruby could be used to write DSLs.
    J2EE folks are fond of bashing Ruby on language fundamentals, especially its typing. But Ruby has first-class parse customization that enables DSLs and leaves Java in the dust.
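
    One caveat: strictly speaking, Ruby does not let you customize its parser. What makes internal DSLs easy is that a class body is executable code, so a "declaration" is just a method call. A minimal sketch in plain Ruby (illustrative only, not Rails source):

    class Model
      def self.belongs_to(name)   # a class-level "macro" is just a class method
        attr_accessor name        # the declaration merely defines accessor methods
      end
    end

    class Item < Model
      belongs_to :person          # reads like grammar, executes like code
    end

    item = Item.new
    item.person = "someone"
    puts item.person              # => someone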
  282. BINGO![ Go to top ]

    I don't think that RoR is meant for doing anything in "ugly legacy enterprise environments".

    I was at JAOO, where David Heinemeier Hansson gave a presentation of RoR. He was asked where he wouldn't use RoR, and he mentioned among other things that it shouldn't be used with an existing DB. RoR is very good at starting up a project, but not at taking over a project.

    Also, David said that he saw RoR as something between PHP and Java EE.
  283. Thanks for your thoughts.[ Go to top ]

    I didn't use PHP or Perl because I think they are quick and dirty. I think Ruby on Rails is quick and clean.
    So try PHP or Perl; probably they are not as dirty as you think (but I am afraid it is too late to write best-sellers about this stuff).
  284. PHP and Perl[ Go to top ]

    I didn't use PHP or Perl because I think they are quick and dirty. I think Ruby on Rails is quick and clean.
    So try PHP or Perl; probably they are not as dirty as you think (but I am afraid it is too late to write best-sellers about this stuff).

    I don't think history is on your side here. You don't have to try either one for very long to know that they are excellent prototyping languages, but will have trouble scaling. I think Ruby on Rails is different. It's much cleaner, from a design perspective, and just as productive...even more so.
  285. PHP and Perl[ Go to top ]

    I don't think history is on your side here. You don't have to try either one for very long to know that they are excellent prototyping languages, but will have trouble scaling. I think Ruby on Rails is different. It's much cleaner, from a design perspective, and just as productive...even more so.
    Actually, it seems to me that history is not really on your side either...

    PHP has thousands of visible applications out there (and who knows how many more internal), some of them very big, and I would argue that its scalability to cover 95% of possible web applications has been solidly established.

    Ruby on Rails is nowhere near there. And don't take this as a jab against it, I love Ruby and I love Ruby on Rails, but there's a difference between loving and falling in love :-)

    Ruby itself has some dire performance problems that will probably prove to be challenging for Ruby on Rails.

    Again: not saying that these problems will materialize, just that so far, neither Ruby nor RoR have really proved themselves in the field. And they probably never will if they don't get accepted at ISPs around the world.

    --
    Cedric
  286. PHP and Perl[ Go to top ]

    I didn't use PHP or Perl because I think they are quick and dirty. I think Ruby on Rails is quick and clean.
    So try PHP or Perl; probably they are not as dirty as you think (but I am afraid it is too late to write best-sellers about this stuff).
    I don't think history is on your side here. You don't have to try either one for very long to know that they are excellent prototyping languages, but will have trouble scaling. I think Ruby on Rails is different. It's much cleaner, from a design perspective, and just as productive...even more so.

    Good points. But I could simply substitute C++ and Java into your post, change the date, and it would apply just as well. I'm watching many of the Java visionaries ramp up to do Ruby on Rails. Some of the people I trust the most - Martin Fowler, Dave Thomas, James Duncan Davidson, Mike Clark and David Geary - are all going to make money this year on Ruby on Rails. Some still do some Java, and others don't.

    Re. PHP and Perl, both have enormous maintenance problems. Both can be done well; both usually aren't. I think you know that and are being argumentative. The Perl community is being gutted as we speak by Perl. And PHP makes it easier to do the wrong thing than the right thing. Ruby on Rails is quick, and clean.
  287. PHP and Perl[ Go to top ]

    Re. PHP and Perl, both have enormous maintenance problems. Both can be done well; both usually aren't. I think you know that and are being argumentative. The Perl community is being gutted as we speak by Perl. And PHP makes it easier to do the wrong thing than the right thing. Ruby on Rails is quick, and clean.
    I see Ruby as more anti-PHP than anti-Java. It has the same problems as any "script" but it is not as mature as PHP and has no chance of becoming more popular than mature scripting languages. Do you make noise about Java vs. Ruby for this reason?
  288. PHP and Perl[ Go to top ]

    You don't have to try either one [PHP and Perl] for very long to know that they are excellent prototyping languages, but will have trouble scaling.
    Bruce, I wonder what you mean by "scaling". I respect you a lot and hence refuse to believe that you mean "scale" as in performance. Because that's a terribly worn-out cliché and a lie, at that. Mediawiki is written in PHP. Wikipedia is running Mediawiki.

    Can many people stand up here and claim they know many systems that "scale" (as in performance) better than Mediawiki? I think only fools would dare.

    So, Bruce, I guess when you were talking about "scaling" you meant large applications, as in a "large code-base". Because, be it heard by everybody: PHP's main problem is not performance, PHP's main problem is its ugliness.

    It is a prototyping language, as you well said, but that's all it is. Beyond that, it is ugly, and it leaves a terrible taste to develop anything of significant size in PHP. Possible - but really unpleasant and ugly to maintain.

    I think it would be great if you clarified what you meant by "scaling" to avoid confusion.

    thanks
  289. PHP and Perl[ Go to top ]

    So, Bruce, I guess when you were talking about "scaling" you meant large applications, as in a "large code-base". Because, be it heard by everybody: PHP's main problem is not performance, PHP's main problem is its ugliness. It is a prototyping language, as you well said, but that's all it is. Beyond that, it is ugly, and it leaves a terrible taste to develop anything of significant size in PHP. Possible - but really unpleasant and ugly to maintain.
    I do not think PHP is as ugly as Ruby; Ruby is the most terrible and ugly script to maintain.
  290. PHP and Perl[ Go to top ]

    You don't have to try either one [PHP and Perl] for very long to know that they are excellent prototyping languages, but will have trouble scaling.
    Bruce, I wonder what you mean by "scaling". I respect you a lot and hence refuse to believe that you mean "scale" as in performance. Because that's a terribly worn-out cliché and a lie, at that. Mediawiki is written in PHP. Wikipedia is running Mediawiki. Can many people stand up here and claim they know many systems that "scale" (as in performance) better than Mediawiki? I think only fools would dare.

    So, Bruce, I guess when you were talking about "scaling" you meant large applications, as in a "large code-base". Because, be it heard by everybody: PHP's main problem is not performance, PHP's main problem is its ugliness. It is a prototyping language, as you well said, but that's all it is. Beyond that, it is ugly, and it leaves a terrible taste to develop anything of significant size in PHP. Possible - but really unpleasant and ugly to maintain.

    I think it would be great if you clarified what you meant by "scaling" to avoid confusion.

    thanks

    Sorry. I should have been more clear. Profuse apologies; the performance of both is proven - they power a huge amount of the dynamic Internet today. I'm talking about scaling as in complexity. Perl has too many secret handshakes; too tough to maintain. PHP does not have the separation of model and view that's necessary. You can do right with both, but it's too easy to do wrong with both.
  291. Thanks for your thoughts.[ Go to top ]

    [...] - Better mapping strategy for the Rails problem set. Here's an Active Record class that maps a Person class to a table called people, and belongs to a department:

    class Person < ActiveRecord::Base
      belongs_to :department
    end

    End of story. You want syntax? That's syntax. You get a property for every column in the database, custom finders like find_by_last_name, and other goodies to manage the belongs_to relationship. No repetition of column names. No code generation. [...]

    Here is my problem. I rarely want my class attribute names to mirror my database columns. When we've had to support bizarre databases with ungodly poor database names, the LAST thing I want is to be tied to them. So what happens to that example when you add a mapping of that DB name to something more intuitive? I'm pretty sure any of the OR mappers allow you to reduce the configuration if you want to do straight mapping. Worst case, generate it and you never have to look at it.

    I stand by previous statements. I'd rather use something that takes a little more work for smaller projects but scales up to the larger ones. I am on the last legs of a project with two new people. They both ramped up very quickly BECAUSE we are using Struts, Spring, and Hibernate (SSH). In addition, I can put anyone on one of the smaller 3-page apps using SSH, and they can move to the one I'm finishing and just have to understand the core business logic, because structurally the apps are the same.

    I'd rather people be portable across many projects. Despite the attempts, I just haven't seen a single clear advantage that Ruby is bringing to the table.
  292. Thanks for your thoughts.[ Go to top ]

    For that project, I think I would have chosen the same technologies. When you need to map, use a mapper. Rails uses a wrapping strategy.
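
    A minimal sketch of that wrapping strategy, assuming a hypothetical legacy column lname that you would rather expose as last_name (read_attribute and write_attribute are Active Record's generic column accessors):

    class Person < ActiveRecord::Base
      def last_name
        read_attribute(:lname)           # expose the ugly legacy column...
      end

      def last_name=(value)
        write_attribute(:lname, value)   # ...under a friendlier name
      end
    end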
  293. Thanks for your thoughts.[ Go to top ]

    For that project, I think I would have chosen the same technologies. When you need to map, use a mapper. Rails uses a wrapping strategy.

    But why use Rails? I mean, I've moved from C->C++->Java, so I'm not married to Java! Why use Rails for one and Java for another when the Java stuff can be coded *almost* as fast? I mean, we've seen the same arguments for, say, ColdFusion and Java, so Rails is nothing new on that front.

    "Code twice as fast in half the code!!"

    If you are willing to live within these confines. "But the confines aren't that bad," says Person.

    What exactly is the advantage of using Ruby, PHP, and Java for three different projects as opposed to one common code base where the common-code gets tested and improved for each project?

    Raw coding speed? I'm asking...why introduce a new element? What am I gaining over what I have?
  294. 10:1 compression of config?[ Go to top ]

    Is this because you get to "cheat" and make code serve as configuration? I have been playing with Groovy, and by using GroovyMarkup, I can make config files that are automatically compiled to complex structures, since the config files are basically code scripts that are just defaulting variables.

    To what would you attribute the 10:1 compression of config? Most config in Java frameworks might not be useful to you, but it is there for a reason.

    The big danger of dynamic compilation or interpreted languages is blurring the code-configuration distinction too much. Then systems become difficult to deploy and tweak without risking changes to code that may have ramifications elsewhere.
  295. 10:1 compression of config?[ Go to top ]

    Convention over configuration. Look at a typical Spring context and tell me how much could be defaulted. And I think that Spring is one of the most productive Java frameworks. Look at the amount of Struts configuration related to navigation. Could you short-circuit that?

    When you make assumptions about naming and navigation, and suggest those conventions, you only have to break your rules a small fraction of the time.
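
    A minimal illustration in Rails 1.x-era terms (controller and model names are hypothetical): with the default :controller/:action/:id route, the URL /posts/show/5 reaches the action below and renders app/views/posts/show.rhtml, with no per-action configuration anywhere.

    class PostsController < ApplicationController
      def show
        @post = Post.find(params[:id])   # params[:id] == "5" comes straight from the URL
      end
    end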
  296. 10:1 compression of config?[ Go to top ]

    Convention over configuration. Look at a typical Spring context and tell me how much could be defaulted. And I think that Spring is one of the most productive Java frameworks. Look at the amount of Struts configuration related to navigation. Could you short-circuit that? When you make assumptions about naming and navigation, and suggest those conventions, you only have to break your rules a small fraction of the time.

    I'm with you on streamlining by obeying conventions.
  297. Thanks for your thoughts.[ Go to top ]

    For at least one class of applications, web-based apps on a relational database where you control your own schema, you'd be crazy not to consider Rails.
    I would say that for this and all other types of web applications it would be crazy not to consider
    DreamWeaver + Tapestry + (Spring|HiveMind) + Hibernate + (Eclipse+Spindle | IntelliJ IDEA) combo.

    DW allows creating the entire UI for the application in DW and presenting/discussing/changing it with customers - before anything else is done - an enormous time saver!

    Tapestry allows using the DW output for development (a VERY noticeable difference from PHP/JSP, where developers have to redo everything based on the designers' input).

    Tapestry allows using DW for maintaining page and component templates (wow!).

    (Spring|HiveMind) – does the plumbing

    Hibernate – takes care of your database schema if you own the schema, or allows you to play nicely with whatever schema is forced on you.

    (Eclipse+Spindle | IntelliJ IDEA) – enormously helps with code creation and navigation – things which do not exist in the world of RoR.


    PS: Thing to consider: RoR does not have decent i18n support...
  298. Thanks for your thoughts.[ Go to top ]

    For at least one class of applications, web-based apps on a relational database where you control your own schema, you'd be crazy not to consider Rails. It's just too productive.

    Then call me crazy! I have had just such an application. I used a well-regarded scripting language (Cold Fusion), as I had to take over the website and this is what the previous developers had used.

    The problem is that the website (not just the schema) evolved. It required some high-performance database access and caching of data. It required some computation for things like best-offer calculation. Then, one day, the website was publicised in the media, and it died. It could not cope with the demand. Fortunately I had been working on a port to a Java-based version. Sooner than I had hoped, this port was put on-line, and the performance problems were solved.

    Since then I have refused to go near a scripting or low-performance language for the development of a website where there is even the slightest possibility of increased requirements.
  299. Thanks for your thoughts.[ Go to top ]

    For at least one class of applications, web-based apps on a relational database where you control your own schema, you'd be crazy not to consider Rails. It's just too productive.
    Then call me crazy! I have had just such an application. I used a well-regarded scripting language (Cold Fusion), as I had to take over the website and this is what the previous developers had used.

    The problem is that the website (not just the schema) evolved. It required some high-performance database access and caching of data. It required some computation for things like best-offer calculation. Then, one day, the website was publicised in the media, and it died. It could not cope with the demand. Fortunately I had been working on a port to a Java-based version. Sooner than I had hoped, this port was put on-line, and the performance problems were solved.

    Since then I have refused to go near a scripting or low-performance language for the development of a website where there is even the slightest possibility of increased requirements.

    Exactly. Why not, instead, start with a base that you KNOW can scale up? I told our management that I can do anything ColdFusion can, at least *almost* as quickly. At least. Except, any of our Java guys can maintain this project. And if it grows, the systems will grow with it.
  300. Thanks for your thoughts.[ Go to top ]

    For at least one class of applications, web-based apps on a relational database where you control your own schema, you'd be crazy not to consider Rails. It's just too productive.
    Then call me crazy! I have had just such an application. [...] Since then I have refused to go near a scripting or low-performance language for the development of a website where there is even the slightest possibility of increased requirements.
    Exactly. Why not, instead, start with a base that you KNOW can scale up? I told our management that I can do anything ColdFusion can, at least *almost* as quickly. At least. Except, any of our Java guys can maintain this project. And if it grows, the systems will grow with it.

    Ruby is much more dynamic, and handles complexity very well. It's a model-view-controller style design pattern. It defaults everything, but many Java developers confuse this with hard wiring. It isn't. And Ruby metaprogramming gives me things Cold Fusion could only dream about.
  301. Thanks for your thoughts.[ Go to top ]

    For at least one class of applications, web-based apps on a relational database where you control your own schema, you'd be crazy not to consider Rails. It's just too productive.
    Then call me crazy! I have had just such an application. [...] Since then I have refused to go near a scripting or low-performance language for the development of a website where there is even the slightest possibility of increased requirements.
    Exactly. Why not, instead, start with a base that you KNOW can scale up? I told our management that I can do anything ColdFusion can, at least *almost* as quickly. At least. Except, any of our Java guys can maintain this project. And if it grows, the systems will grow with it.
    Ruby is much more dynamic, and handles complexity very well. It's a model-view-controller style design pattern. It defaults everything, but many Java developers confuse this with hard wiring. It isn't. And Ruby metaprogramming gives me things Cold Fusion could only dream about.

    But ColdFusion is simpler, right? If I need to do things that ColdFusion can only dream about, I'm still back at Java.

    I have to tell you, I'm just not convinced. If Ruby handles complexity so well, why is everyone talking about CRUD operations and prototyping small apps?

    I just don't buy it. My interpretation of the examples that you and others have given indicates to me that it handles simple cases simply but would not grow well. You yourself said that if you need to map class attributes to names that differ from the database columns, you should use an ORM.

    That is the very reasonable example I gave, and it already appears to neutralize the terseness of the example you gave.

    Ah well, I guess we'll all see how it plays out...
  302. Thanks for your thoughts.[ Go to top ]

    Ruby is much more dynamic, and handles complexity very well. It's a model-view-controller style design pattern. It defaults everything, but many Java developers confuse this with hard wiring. It isn't. And Ruby metaprogramming gives me things Cold Fusion could only dream about.

    This is completely missing the point of my original post about Cold Fusion. Dealing with complexity was not the issue. The issue was performance. This is the same reason I gave up (to my deep regret) Smalltalk years ago. For me a reasonable development language has to have four properties, and must not compromise to any degree on any of them:

    1. It must have real cross-platform portability at the point of deployment (I don't want to have to produce and support multiple versions for different platforms).
    2. It must be free.
    3. It must be fast (and by this, I mean close to C or C++ speed).
    4. There must be good IDEs for the language.

    With any given Smalltalk implementation, I could get two or three of those, but not all four. With Java I do not have to compromise. Ruby (and other scripting languages) tend to fail badly on 3 and 4.

    As far as I can tell, Ruby has some way to go before it reaches anything like the development environment capabilities of Smalltalk as it was decades ago, let alone gaining the speed of C or Java.
  303. Thanks for your thoughts.[ Go to top ]

    You start with basically a working CRUD app per table. Windows to do list, show, edit, delete, new. Sure, you'll rewrite some, but you can put something in front of your customer instantly. That's compelling.
    Methodologically speaking, that's an executable model. Didn't Sun's Rave and Ace projects already have that fully tooled before RoR? Also other open source projects predated RoR, such as middlegen and JAG.
    Much better AJAX, and cleaner view technology.
    Interesting.
  304. Thanks for your thoughts.[ Go to top ]

    RoR's specific niche is a very dangerous one, which may be too narrow for too many types of projects: if the app grows too complex, which BTW is what I've seen in 90% of the projects I've worked on, I wonder how RoR will behave. Will it have too many growing pains? If so, the immediate gains would be lost down the road when people try to maintain the unmaintainable, as further on your system will need to integrate with other systems, migrate to a different database, etc., as usually happens, especially in big corporates. At least where I work, this happens very often. Does anyone know how RoR compares to Java WRT maintainability and scalability?
  305. Thanks for your thoughts.[ Go to top ]

    RoR's specific niche is a very dangerous one, which may be too narrow for too many types of projects: if the app grows too complex, which BTW is what I've seen in 90% of the projects I've worked on, I wonder how RoR will behave. Will it have too many growing pains? If so, the immediate gains would be lost down the road when people try to maintain the unmaintainable, as further on your system will need to integrate with other systems, migrate to a different database, etc., as usually happens, especially in big corporates. At least where I work, this happens very often. Does anyone know how RoR compares to Java WRT maintainability and scalability?

    The structure of RoR applications is much cleaner than that of their Java counterparts. I'd guess the maintenance will be simpler, but I can't say for sure.
  306. Thanks for your thoughts.[ Go to top ]

    RoR's specific niche is a very dangerous one, which may be too narrow for too many types of projects [...] Does anyone know how RoR compares to Java WRT maintainability and scalability?
    The structure of RoR applications is much cleaner than that of their Java counterparts. I'd guess the maintenance will be simpler, but I can't say for sure.
    That can be said of VB programs too, but everyone knows what we got from that (in)famous language/tool. Let's wait and see if RoR will live up to its expectations.
  307. Thanks for your thoughts.[ Go to top ]

    That can be said of VB programs too, but everyone knows what we got from that (in)famous language/tool. Let's wait and see if RoR will live up to its expectations.

    I don't like the structure of most VB applications. 10K of code hanging off of an OK button, over and over. The UI and the tool encourage you to tightly couple view and model logic. I don't see the same with Ruby on Rails.
  308. Don't abandon Java just yet[ Go to top ]

    I think it's a pity that Java developers are so quickly grasping for Ruby on Rails without even considering that the full J2EE/Spring stack isn't the only possible way to develop web applications in Java. There are many projects in the Java community that strive for the same benefits without developers having to leave behind the language, the toolset, the libraries and the enterprise features if you need them. I can only speak for our own project, RIFE, by going over the "what's different" points that Bruce wrote in the beginning of this thread:

    * Rapid turnaround time

    RIFE only needs restarts when you change your back-end APIs by introducing new methods or changing existing signatures. This is because Java hot-swap has this limitation. The actual declarations, elements (our actions), templates, dependencies ... are all reloaded on-the-fly and allow you to work in a very iterative fashion. Besides that, applications generally start up in a few seconds since they only require a basic servlet container like Resin, Jetty or Tomcat. You thus get most of the benefits here.

    * Starting point

    RIFE/Crud provides a similar approach to scaffolding, but without code generation and completely at run-time. Through a rich meta-data API (constraints), you're even able to handle image uploads, XHTML validation, content transformation, etc. without writing more than a single statement.
    I think that the run-time generation is much more powerful since you can actually rely on the automated functionality without having to manually maintain any generated code when a new version of the scaffolding comes out.

    * Reduced configuration

    There is a lot of preference involved here. I agree that XML configuration went overboard in many Java frameworks, but that shouldn't be mistaken for declaration. Even though I value quick results, I think it's extremely important to have one centralized declaration where all my elements, URLs, filters, flows, etc. are clearly available. Sure, that might require one or two additional lines of code once in a while, but in the end clarity and maintainability get a lot better. Apart from that, our site-structure declaration provides complete knowledge about all state transitions, including data and logic. This is used by the engine to provide advanced features like behavioral inheritance, embedded elements (portlets), URL localization, ...
    Sometimes you can't get something for nothing.

    * Better mapping strategy

    You have the 'association' or 'manyToOne' constraints in our meta-data API that do exactly the same. Personally I cringe when I see class properties being auto-generated from database tables. I think that's the wrong way 'round. We support optional auto-generation of the database tables from your domain objects according to the meta-data you declared there. That is really a single point of modification since everything is driven from there. You don't have bits that rely on the limited constraints that database tables provide and other bits that rely on meta-data that sits in your domain objects.

    * Much better AJAX, and cleaner view technology

    I can't speak for AJAX since I haven't done much with it; I'm an OpenLaszlo guy. As for view technology, Ruby's rhtml is anything but clean. Templates are once again not real templates and contain logic. Admittedly they contain very little logic, but they are not dumb layout blueprints. RIFE's template engine has a very simple but unique design which doesn't require any logic in the template at all. Ruby's view technology is maybe quick, but it isn't clean.

    Finally, I encourage people that are interested in RIFE or that think they know what it does from seeing a couple of examples to read the 'RIFE misconceptions' blog post I wrote a week ago. This will save everyone a lot of time ;-)

    Don't throw Java away yet if you're looking for simpler approaches to developing (web) applications; looking at non-mainstream projects might just give you what you're looking for.
  309. Don't abandon Java just yet[ Go to top ]

    I think it's a pity that Java developers are so quickly grasping for Ruby on Rails

    I don't think there is any evidence that they are. All that is happening is that Ruby on Rails hype is being widely discussed.
    Personally I cringe when I see class properties being auto-generated from database tables. I think that's the wrong way 'round.

    +1
  310. Don't abandon Java just yet[ Go to top ]

    I think it's a pity that Java developers are so quickly grasping for Ruby on Rails
    I don't think there is any evidence that they are. All that is happening is that Ruby on Rails hype is being widely discussed.
    Personally I cringe when I see class properties being auto-generated from database tables. I think that's the wrong way 'round.
    +1

    I firmly believe auto-generation in either direction is just plain wrong. Maybe some fancy middlegen tool can create both from a common definition, but I think that if you are going to create a DB schema from Java objects or create Java objects from a DB schema, then you are just going to make a mess of one of them. Just phoning in the database schema design is a sure way to cripple a project.

    Taking the extra time to do both right will pay dividends for YEARS to come for most projects. Sure, it doesn't mean anything for a throw-away demo. But how many of those "throw-aways" suddenly become the initial production version?
  311. Don't abandon Java just yet[ Go to top ]

    I think it's a pity that Java developers are so quickly grasping for Ruby on Rails
    I don't think there is any evidence that they are. All that is happening is that Ruby on Rails hype is being widely discussed.
    Personally I cringe when I see class properties being auto-generated from database tables. I think that's the wrong way 'round.
    +1
    I firmly believe auto-generation in either direction is just plain wrong. Maybe some fancy middlegen tool can create both from a common definition, but I think that if you are going to create a DB schema from Java objects or create Java objects from a DB schema, then you are just going to make a mess of one of them. Just phoning in the database schema design is a sure way to cripple a project.

    Taking the extra time to do both right will pay dividends for YEARS to come for most projects. Sure, it doesn't mean anything for a throw-away demo. But how many of those "throw-aways" suddenly become the initial production version?

    Absolutely!! This is why I'm such a hardass about this. I've seen too many prototypes demo'ed to customers who said "If you had just this, it'll be perfect."

    After all this, I refuse to hack out anything because I *know* what will happen. I've yet to see anyone say, "I'm sorry you've done this application right."
  312. Don't abandon Java just yet[ Go to top ]

    I firmly believe auto-generation in either direction is just plain wrong. Maybe some fancy middlegen tool can create both from a common definition, but I think that if you are going to create a DB schema from Java objects or create Java objects from a DB schema, then you are just going to make a mess of one of them. Just phoning in the database schema design is a sure way to cripple a project.

    Taking the extra time to do both right will pay dividends for YEARS to come for most projects. Sure, it doesn't mean anything for a throw-away demo. But how many of those "throw-aways" suddenly become the initial production version?

    I agree with that; it's handy, though, for very trivial applications. Anything a bit more complex should of course be properly designed.
  313. Thanks for your thoughts.[ Go to top ]

    My concerns about using RoR/Ruby are (I know almost nothing about RoR):
    1. Code reuse. I have a bunch of stuff that I developed myself (utility classes, a mini framework, a code generator, etc.) over my 6 years of development in Java, besides the many 3rd-party libs/tools available that I am using. Do I have to develop all this again if I use RoR/Ruby for "suitable projects"?

    2. Application life span. On a few occasions, a simple application that I developed slowly became bigger and bigger. I worry that if I started a project using RoR/Ruby for "suitable projects", unforeseen future phases would require it to grow into "unfamiliar" areas for which RoR is not well suited.

    These issues would be non-issues if JRuby were mature enough and had a "jRoR" :)

    I am still looking for a powerful dynamic language to complement my suite of tools and give me added flexibility and productivity in development.
  314. Jython[ Go to top ]

    Well, Jython (an implementation of Python in Java) can reuse your Java code, and Jython classes can be used from Java. I for one find Python a much cleaner language than Ruby, and it's just as dynamic.

    Jython stagnated for a few years, but things seem to be picking up steam, and there is a new release on the horizon.

    See http://wiki.python.org/jython/ and http://jython.org for more info, but here's a snippet that shows use of Spring:

    Jython 2.2a1 on java1.5.0_05 (JIT: null)
    Type "copyright", "credits" or "license" for more information.
    >>> # i didn't set class path before starting, so change it here:
    >>> import sys
    >>> sys.path.append('spring-1.2.2.jar')
    >>> sys.path.append('commons-logging-1.0.4.jar')
    >>> beanFactory = DefaultListableBeanFactory()
    >>> print beanFactory.getBeanDefinitionCount()
    0
    >>>

    That's an empty bean factory, but you get the idea...
  315. Jython[ Go to top ]

    Oops, I missed a line cutting and pasting. In case anybody tried this and noticed it didn't work, before instantiating the BeanFactory, you need to do "from org.springframework.beans.factory.support import DefaultListableBeanFactory".

    Apologies...
  316. Jython[ Go to top ]

    Well, Jython (an implementation of Python in Java) can reuse your Java code, and Jython classes can be used from Java.

    I think Jython is the top JVM language right now. Groovy included.
  317. under $2 a book?[ Go to top ]

    Bruce, a few years ago I purchased "Bitter Java" for about $50.

    I am disappointed to hear that out of that $50, you would receive less than 2 bucks (although probably not as disappointed as you!).

    Maybe you should consider self-publishing and cut out all the gunk between your thoughts and ideas and the people like me who want to read them. I am sure you contributed more than 4% of the value to the book - you should receive more than 4% of what I am willing to pay for it.

    I am currently working on self-publishing audio training for developers (lots of friends have a long commute to work). Not much yet, only one produced so far, but should have more over the next few months:
      
    http://www.developeradvantage.com/products.html

    -Don
  318. Beyond Java[ Go to top ]

    they were able to recreate the application in Ruby in 4 nights.

    How long did they work on the original, 2 nights? ;-) Seriously though, this statement doesn't mean anything. The second time you build something, it always goes faster, regardless of what you build it in. I do agree though about "the next thing". It is coming; it is there already, but it hasn't become "the thing" yet.

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!
  319. On Java![ Go to top ]

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!

    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is?

    Peace,

    Cameron Purdy
    Tangosol Coherence: The Java Data Grid
  320. +1[ Go to top ]

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!

    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is?
    Peace,
    Cameron Purdy -- Tangosol Coherence: The Java Data Grid

    That's the question I asked myself when the thread first appeared. Clearly, the JVM can support a language like Jython, so it should be able to support Ruby. I'm sure the .NET team is working hard to make the CLR more friendly towards scripting languages like IronPython. Would anonymous types in .NET make it easier to write a Ruby interpreter? Maybe someone from the Jython or Groovy team can shed some light.

    peter lin
  321. +1[ Go to top ]

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!
    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is? Peace, Cameron Purdy -- Tangosol Coherence: The Java Data Grid
    That's the question I asked myself when the thread first appeared. Clearly, the JVM can support a language like Jython, so it should be able to support Ruby. I'm sure the .NET team is working hard to make the CLR more friendly towards scripting languages like IronPython. Would anonymous types in .NET make it easier to write a Ruby interpreter? Maybe someone from the Jython or Groovy team can shed some light. -- peter lin

    It's pretty straightforward to port just about any language to the JVM -- look at how many languages there already are:

    http://www.robert-tolksdorf.de/vmlanguages.html

    The only real issue is how the language and the JVM work together at the object/API level. What we tried really hard to do with Groovy was make a Ruby-ish language which maps directly to regular Java objects, with a minimum of impedance mismatch between the language objects and the JVM objects, and to reuse as much of the JVM's APIs as possible (Collections, Strings, Comparable etc.).

    It's pretty simple and easy to add new languages to the JVM. I'm a bit surprised that developers in the Ruby/Python worlds spend so much time on their own VMs when the JVM is so darned useful (those virtual method call inlines are really handy, along with great garbage collection).

    James
    LogicBlaze
  322. +1[ Go to top ]

    Use JRuby! And support them. It actually works (no continuations yet).
  323. On Java![ Go to top ]

    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is?

    I don't see that as the issue.

    In my opinion, for every thing dynamic languages give you, they also take something away.

    Depending on your application you may be better or worse off for the trade.

    I want the best of both worlds.

    I've used Python a lot, and I don't buy the argument that dynamic typing makes things easier. Maybe I'm just too stuck in the C++/Java way of thinking, but I like having static types. I also like metaprogramming. In fact, I think metaprogramming would be more powerful with static typing than without.
  324. +1[ Go to top ]

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!
    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is? Peace, Cameron Purdy -- Tangosol Coherence: The Java Data Grid

    I'm very interested in JRuby. I'm about ready to start promoting unit tests for Java in the Ruby language. JRuby is not there yet, but it's getting very close. I think it's passing more than 90% of the existing Ruby test cases now.
  325. Ruby Performance was +1[ Go to top ]

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!
    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is? Peace, Cameron Purdy -- Tangosol Coherence: The Java Data Grid
    I'm very interested in JRuby. I'm about ready to start promoting unit tests for Java in the Ruby language. JRuby is not there yet, but it's getting very close. I think it's passing more than 90% of the existing Ruby test cases now.

    I've no personal experience with this, but the buzz on the street is that Ruby performance sets you back to JDK 1.0. Certainly users are not going to be satisfied with taking that size of a performance hit without some grumbling, no matter how much faster you get the application into their hands.
  326. Ruby Performance was +1[ Go to top ]

    Some discussion on the subject by people who do have personal experience with this.

    Is Ruby Better Than..?
  327. Ruby Performance was +1[ Go to top ]

    I'm getting really tired of comments saying that fast development doesn't mean anything. It means a lot. Why should we wait and wait and wait forever? I've waited enough, more than enough. Make that hotswap accept API changes, for goodness' sake PLEASE!
    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is? Peace, Cameron Purdy -- Tangosol Coherence: The Java Data Grid
    I'm very interested in JRuby. I'm about ready to start promoting unit tests for Java in the Ruby language. JRuby is not there yet, but it's getting very close. I think it's passing more than 90% of the existing Ruby test cases now.
    I've no personal experience with this, but the buzz on the street is that Ruby performance sets you back to JDK 1.0. Certainly users are not going to be satisfied with taking that size of a performance hit without some grumbling, no matter how much faster you get the application into their hands.

    At least, not without compensation, like incredible productivity. And for the target niche, if you can cache well, the latency will be in the database. We've got some experience in this area.
  328. invokedynamic[ Go to top ]

    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is?
    See this for a start.
  329. thanks for the link[ Go to top ]

    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is?
    See this for a start.

    Assuming this gets implemented in a future JVM, and there's a robust Ruby interpreter for the JVM, wouldn't that mean Java can accommodate RoR fans and provide an easier way to integrate Java components?

    peter
  330. more links for ya[ Go to top ]

    Assuming this gets implemented in a future JVM, and there's a robust Ruby interpreter for the JVM, wouldn't that mean Java can accommodate RoR fans and provide an easier way to integrate Java components? -- peter
    Absolutely. And here are some more links to innovations coming someday to a virtual machine near you (in this case mostly the CLR, sorry, but we can expect similar stuff for the JVM too):
    Composable memory transactions
    Language-integrated query
    C-omega
    F-sharp, an ML-derived language for the CLR
    IBM's X10 programming language (targeting the JVM)
    All these things explain why I get annoyed at the sheer ignorance of people who imagine that a simple-minded little framework for building toy apps in a currently non-scalable language is where the true innovation is. Ha! Happy reading.
  331. more links for ya[ Go to top ]

    But don't those links suggest that there's more to life than the Java language? In my book, I clearly state that the next successful language will need a credible JVM implementation.

    In fact, in a conversation with Dave Thomas this weekend, I told him that JRuby, with Rails, was the most important Ruby project going. I still believe that.
  332. more links for ya[ Go to top ]

    But don't those links suggest that there's more to life than the Java language? In my book, I clearly state that the next successful language will need a credible JVM implementation. In fact, in a conversation with Dave Thomas this weekend, I told him that JRuby, with Rails, was the most important Ruby project going. I still believe that.
    There is definitely more to life than Java; we do agree on that. I just don't believe that RoR matters much in the greater scheme of things, and I regret that it's given so much attention when there are so many more interesting and profound things to consider. Note also that all of the projects I mentioned target either the CLR or the JVM, i.e. innovations happen in the development tools and languages, but the runtime environment remains firmly in the continuity of what exists today. This, I believe, is a key point.
  333. more links for ya[ Go to top ]

    The concepts behind Ruby on Rails are radical and important. RoR may not be the next great thing, but metaprogramming is extremely important. Look at how much effort we're spending on metaprogramming. Spring, Hibernate, and Seam. AOP. Annotations. EJB. JDO. All of these frameworks have thousands of lines of code invested in metaprogramming. But Ruby, Smalltalk, Lisp and even Python to some extent make metaprogramming much easier.

    You say to look beyond the language to the JVM. I agree with the importance of the JVM, but I say to you "look beyond the tool to the language that enables it."

    RoR could well be just the first in a wave of metaprogramming frameworks. That, together with the configuration, convention and defaulting strategies, is what has me so excited.

    The JVM needs to get more dynamic...closures, continuations, and dynamic typing are all very important to some of the most productive environments in the world.
  334. dynamic stuff[ Go to top ]

    The concepts behind Ruby on Rails are radical and important. RoR may not be the next great thing, but metaprogramming is extremely important. Look at how much effort we're spending on metaprogramming. Spring, Hibernate, and Seam. AOP. Annotations. EJB. JDO. All of these frameworks have thousands of lines of code invested in metaprogramming. But Ruby, Smalltalk, Lisp and even Python to some extent make metaprogramming much easier. You say to look beyond the language to the JVM. I agree with the importance of the JVM, but I say to you "look beyond the tool to the language that enables it." RoR could well be just the first in a wave of metaprogramming frameworks. That, together with the configuration, convention and defaulting strategies, is what has me so excited. The JVM needs to get more dynamic... closures, continuations, and dynamic typing are all very important to some of the most productive environments in the world.
    On this last point we are in full agreement. This blog entry by Cedric Beust should also be of interest to you. If Sun needs 3 years to add a byte code to the JVM (if I may oversimplify a bit), then I'm afraid the fun is going to shift to .NET where innovation seems much faster nowadays...
  335. dynamic stuff[ Go to top ]

    The concepts behind Ruby on Rails are radical and important. RoR may not be the next great thing, but metaprogramming is extremely important. Look at how much effort we're spending on metaprogramming. Spring, Hibernate, and Seam. AOP. Annotations. EJB. JDO. All of these frameworks have thousands of lines of code invested in metaprogramming. But Ruby, Smalltalk, Lisp and even Python to some extent make metaprogramming much easier. You say to look beyond the language to the JVM. I agree with the importance of the JVM, but I say to you "look beyond the tool to the language that enables it." RoR could well be just the first in a wave of metaprogramming frameworks. That, together with the configuration, convention and defaulting strategies, is what has me so excited. The JVM needs to get more dynamic... closures, continuations, and dynamic typing are all very important to some of the most productive environments in the world.

    On this last point we are in full agreement. This blog entry by Cedric Beust should also be of interest to you. If Sun needs 3 years to add a byte code to the JVM (if I may oversimplify a bit), then I'm afraid the fun is going to shift to .NET where innovation seems much faster nowadays...

    The trade-off between moving forward and backward compatibility is tricky. I'd argue it's much easier to screw that up than to get it right. If Sun were to totally break JVM compatibility and introduce several new low-level features, I'm sure there are plenty of businesses that would be furious. I'm sure there would be plenty of people who would be happy too, but it's pretty hard to guess the reaction and impact. At the end of the day, if the developers of JRuby achieve their goals, then adding low-level features might not amount to much.

    There are several LISP interpreters for Java, but the learning curve is rather steep. I don't know if metaprogramming is the future. My biased opinion is that it will be a gradual shift; it's anyone's guess how long the process will take.

    peter lin
  336. dynamic stuff[ Go to top ]

    This blog entry by Cedric Beust should also be of interest to you. If Sun needs 3 years to add a byte code to the JVM (if I may oversimplify a bit), then I'm afraid the fun is going to shift to .NET where innovation seems much faster nowadays...
    The trade-off between moving forward and backward compatibility is tricky. I'd argue it's much easier to screw that up than to get it right. If Sun were to totally break JVM compatibility and introduce several new low-level features, I'm sure there are plenty of businesses that would be furious.
    The point in my blog entry was actually to observe that adding a bytecode to the JVM is completely backward compatible, as opposed to changes such as adding keywords to the language (assert, enum), which, even now, are still wreaking havoc on existing codebases.

    --
    Cedric
  337. dynamic stuff[ Go to top ]

    The point in my blog entry was actually to observe that adding a bytecode to the JVM is completely backward compatible, as opposed to changes such as adding keywords to the language (assert, enum), which, even now, are still wreaking havoc on existing codebases. -- Cedric

    That's an interesting point. Is there a point in time when forking and doing a new thing makes more sense than maintaining backwards compatibility? I believe even in the Ruby space, "Matz" Matsumoto (the creator of Ruby) did a presentation on "How Ruby Sucks"

    http://www.rubyist.net/~matz/slides/rc2003/

    There he argues that the language's problems and idiosyncrasies are better addressed by going to a "next-gen" Ruby2 VM (aka RITE), with an upfront admission of incompatibilities, instead of kludging to maintain backwards compatibility.

    Maybe Java should go the same route to absorb new ideas gleaned from the Ruby, Python and PHP people, and say upfront that any new generation of the language will have backward-compatibility issues. It might allow a more aggressive refactoring path for Sun to address all the issues mentioned by the community, without the burden of legacy support.
  338. dynamic stuff[ Go to top ]

    This blog entry by Cedric Beust should also be of interest to you. If Sun needs 3 years to add a byte code to the JVM (if I may oversimplify a bit), then I'm afraid the fun is going to shift to .NET where innovation seems much faster nowadays...

    The trade-off between moving forward and backward compatibility is tricky. I'd argue it's much easier to screw that up than to get it right. If Sun were to totally break JVM compatibility and introduce several new low-level features, I'm sure there are plenty of businesses that would be furious.

    The point in my blog entry was actually to observe that adding a bytecode to the JVM is completely backward compatible, as opposed to changes such as adding keywords to the language (assert, enum), which, even now, are still wreaking havoc on existing codebases. -- Cedric

    I agree with that. Adding a bytecode "should" be backward compatible, not that it necessarily is. Adding new keywords definitely causes problems. I'm glad the decision isn't up to me. Of course, assert and enum never caused anyone to rewrite their code :)

    excuse my silly joke

    peter
  339. dynamic stuff[ Go to top ]

    The concepts behind Ruby on Rails are radical and important. RoR may not be the next great thing, but metaprogramming is extremely important. Look at how much effort we're spending on metaprogramming. Spring, Hibernate, and Seam. AOP. Annotations. EJB. JDO. All of these frameworks have thousands of lines of code invested in metaprogramming. But Ruby, Smalltalk, Lisp and even Python to some extent make metaprogramming much easier. You say to look beyond the language to the JVM. I agree with the importance of the JVM, but I say to you "look beyond the tool to the language that enables it." RoR could well be just the first in a wave of metaprogramming frameworks. That, together with the configuration, convention and defaulting strategies, is what has me so excited. The JVM needs to get more dynamic... closures, continuations, and dynamic typing are all very important to some of the most productive environments in the world.
    On this last point we are in full agreement. This blog entry by Cedric Beust should also be of interest to you. If Sun needs 3 years to add a byte code to the JVM (if I may oversimplify a bit), then I'm afraid the fun is going to shift to .NET where innovation seems much faster nowadays...

    Well, I think MS will always have the advantage when it comes to tools. They control it all. Similarly, Apple can innovate faster for OS X and its hardware. Imagine if MS said they were dropping Intel for PA-RISC or some such.

    When you are open, you are slower. People accept that because you are open.
  340. dynamic stuff[ Go to top ]

    Sun has been rightfully conservative with Java. They've invested in the brand, and in the notion that the brand is the language. It makes sense for them: control the language to some extent to control mind share, and sell more hardware, services and software. Java is Sun's biggest asset. It's not that they can't extend the JVM. It's that changes in the JVM are more dangerous for the language.

    Microsoft has another problem. They have invested in the brand of the operating system. They make another set of compromises, because they have a legacy to support, they *want* to drive upgrades, and they need to support multiple languages to do so.

    With the competition fueled by .NET, you can see some movement in this area: embracing Groovy, supporting a JSR to upgrade the JVM for dynamic languages, and so on. Sun needs to stay aggressive. It shouldn't be about protecting Java, the language. It should be all about protecting Java, the platform, and the JVM. But we're going to have to change some attitudes to do so. And I'm not so sure that the community wants to change... yet.
  341. more links for ya[ Go to top ]

    The JVM needs to get more dynamic...closures, continuations, and dynamic typing are all very important to some of the most productive environments in the world.

    Absolutely. I believe that Java (or some successor on the JVM) will evolve to add such features.

    My concern is the current emphasis on RoR, which (in my view) does so much wrong that it distracts from the above message.
  342. How long to come up to speed?[ Go to top ]

    Obviously you did not port the app in 4 days AND learn ROR. How much time did you have to invest in the ROR learning curve to come up to speed so that you could port the app in 4 days?
  343. Continuations[ Go to top ]

    The JVM needs to get more dynamic...closures, continuations, and dynamic typing are all very important to some of the most productive environments in the world.

    I see this has just been published
    http://jakarta.apache.org/commons/sandbox/javaflow/

    I had a play with ATCT from Velare. It is certainly mind-blowing stuff, but I fear such Java continuations could end up getting over-used. Let's not use them as a "golden hammer".

    Kit
  344. is the JVM right for Ruby[ Go to top ]

    Bruce Tate wrote:
    But don't those links suggest that there's more to life than the Java language? In my book, I clearly state that the next successful language will need a credible JVM implementation. In fact, in a conversation with Dave Thomas this weekend, I told him that JRuby, with Rails, was the most important Ruby project going. I still believe that.

    The discussion about using the JVM (or the C#/.NET VM) took place for Perl 6 in the past. They decided to go for their own VM (Parrot) instead, because it was supposed to be better equipped for executing dynamic languages. They also agreed with the Python and Ruby people that, in the long term, all three languages would run on this same VM.

    The question is how valid this point is for Ruby. Will JRuby remain at a significant disadvantage compared to Ruby in the long run? Is the JVM really wrong?

    Even if that is the case, if JRuby can achieve acceptable speed for some applications and is used with care, it might help with the transition.

    Another concern will of course be how compatible JRuby will be with Ruby. I think it is quite essential that these remain in sync.
  345. is the JVM right for Ruby[ Go to top ]

    The discussion about using the JVM (or the C#/.NET VM) took place for Perl 6 in the past. They decided to go for their own VM (Parrot) instead, because it was supposed to be better equipped for executing dynamic languages. They also agreed with the Python and Ruby people that, in the long term, all three languages would run on this same VM.
    I'm very excited about Parrot. Like Kaffe or GCJ, for running Java it's potentially another open-source rival to Sun's proprietary stranglehold on the JVM.

    Ruby and Python run on both the JVM and Parrot. So it's natural to ask which VM is faster for Ruby and Python. In theory Parrot could be faster at dynamic languages, since that's what its instruction set was designed for. But I've never seen such a benchmark comparison of, say, JRuby vs Cardinal.

    I wonder whether Parrot's untested claim to be faster for dynamic languages would really be due to the relative clunkiness of the JVM instruction set. Or is it possible that, if market demand were great enough, Sun could further optimize the JVM internally for dynamic languages without at all disturbing the JVM instruction set? Is Parrot's claim of faster-by-design dynamic language execution potentially just marketing hype?
  346. is the JVM right for Ruby[ Go to top ]

    I'm very excited about Parrot. Like Kaffe or GCJ, for running Java it's potentially another open-source rival to Sun's proprietary stranglehold on the JVM.

    Can't be much of a stranglehold if others are implementing it (Kaffe). Surely there is nothing to stop anyone implementing the JVM while adding additional instructions to support dynamic languages?
    Ruby and Python run on both the JVM and Parrot. So it's natural to ask which VM is faster for Ruby and Python. In theory Parrot could be faster at dynamic languages since that's what its instruction set was designed for.

    The way I see this, things are very much going to be theoretical for some time, as Parrot is still only at version 0.4.0, and the Roadmap on the website says: "As of this writing, there is no one place in which all remaining parrot work is documented." It certainly isn't vapourware, but seems to have quite a way to go before it is finished.

    Presumably, if these languages could be compiled to Java bytecodes on the JVM (this is on the JRuby roadmap), they would get the huge performance boost of HotSpot, which should give them a major advantage over Parrot for some time to come.

    Personally, I am excited about dynamic languages on the JVM, as no matter what the disadvantages, it is a VM that is available for production use.
    Or is it possible that, if market demand were great enough, Sun could further optimize the JVM internally for dynamic languages without at all disturbing the JVM instruction set?

    I believe there is such a proposal (well, one instruction would be added).
  347. is the JVM right for Ruby[ Go to top ]

    I'm very excited about Parrot. Like Kaffe or GCJ, for running Java it's potentially another open-source rival to Sun's proprietary stranglehold on the JVM.
    Can't be much of a stranglehold if others are implementing it (Kaffe). ... Presumably, if these languages could be compiled to Java bytecodes on the JVM (this is on the JRuby roadmap), they would get the huge performance boost of HotSpot, which should give them a major advantage over Parrot for some time to come.
    Parrot might become the leading VM for Perl, Python, and Ruby. That combined community would be big, and the mindshare and hardening this confers might mature Parrot's JIT enough to threaten HotSpot. I.e., Parrot might achieve, say, 90% of HotSpot's performance. That, combined with being open source, is why I'm more excited about Parrot as a JVM than Kaffe or GCJ.
  348. is the JVM right for Ruby[ Go to top ]

    Parrot might become the leading VM for Perl, Python, and Ruby.

    It might, but it looks like there is years of work ahead.

    That combined community would be big, and the mindshare and hardening this confers might mature Parrot's JIT enough to threaten HotSpot. I.e., Parrot might achieve, say, 90% of HotSpot's performance.
    I simply don't believe this. HotSpot has taken a huge amount of work, and I'm afraid I see no evidence that the 'open source community' has either the inclination or the resources to do this work. There are constant claims that Python and Ruby 'are fast enough', so there is little motivation to go for the speed required for a general purpose language.
    That, combined with being open source, is why I'm more excited about Parrot as a JVM than Kaffe or GCJ.

    If it were fast, and it could also run Java, I would get excited, but I simply can't see this ever happening. Maybe in 5 years I will be proved wrong!
  349. more links for ya[ Go to top ]

    Assuming this gets implemented in a future JVM, and there's a robust Ruby interpreter for the JVM, wouldn't that mean Java can accommodate RoR fans and provide an easier way to integrate Java components?
    peter

    Absolutely. And here are some more links to innovations coming someday to a virtual machine near you (in this case mostly the CLR, sorry, but we can expect similar stuff for the JVM too): composable memory transactions, language-integrated query, C-omega, F-sharp (an ML-derived language for the CLR), and IBM's X10 programming language (targeting the JVM). All these things explain why I get annoyed at the sheer ignorance of people who imagine that a simple-minded little framework for building toy apps in a currently non-scalable language is where the true innovation is. Ha! Happy reading.

    Thanks for the links. I've read quite a few of those in the past, but I wasn't aware of invokedynamic. I hope it gets the JSR stamp and gets implemented within the next 3 years.

    peter
  350. On Java![ Go to top ]

    The real question we should be asking is this: Why isn't the JVM the best platform for Ruby to be running on, and how do we "fix" it so it is?

    Bingo....

    If you want a silver bullet...why not just develop your apps using the wizards in Microsoft Access....

    When I hear about Ruby (which I'm quite interested in and open to) at things like NFJS, I hear things like... well, it's got a runtime too, and it's got GC too... but you can't tell me that as much time has gone into optimization and testing of the Ruby runtime.

    If you ask me, the pressure is there now to move Java -- the platform -- ahead, thinking about ways to adapt it to new, more dynamic approaches where appropriate.

    There was all the hype about Spring which, while a lovely framework, was a bit over the top. Now I see Spring as a really nice framework running inside a J2EE container, and you see EJB 3.0 etc. inside the app server platform adopting some of the characteristics of Spring, Hibernate, etc.

    Ruby is going to be the same... and I actually see it instead as a nice replacement for Python. I hope the Ruby people will not follow in the path of the Python zealots, who crow about not having brackets and semicolons as some sort of revolutionary benefit. Horses for courses, as they say... let's not talk about any technology like it's the Pocket Fisherman.

    Rock on!
    MC
  351. Beyond Java[ Go to top ]

    they were able to recreate the application in Ruby in 4 nights. How long did they work on the original, 2 nights? ;-) Seriously though, this statement doesn't mean anything. The second time you build something, it always goes faster, regardless of what you build it in.

    I can say that my experience is different. I first implemented an SMTP client in Smalltalk and then followed that up with a port to Java. It took just as long to implement the RFC in Java as it did in Smalltalk. I also moved an application from Smalltalk to C++; that move took much longer than the first effort. On the other side of the coin, I replaced 4 man-years of C code with 3 days of shell scripting.

    I would attribute each of the slowdowns and speedups in delivery time to the expressiveness of the languages being used and their applicability to the problem domain. In other words, I'd not suggest that all programs should be replaced with shell scripts; that is just plain nonsense. However, one should feel free to use another language if the situation calls for it.
  352. Some predictions[ Go to top ]

    Could Ruby and frameworks such as Ruby on Rail be the next big thing?

    I predict a new three-letter acronym will arise in the enterprise software sector before the year 2008 that will be hailed as the end-all, save-all Silver Bullet(tm) to cure all the ills of the world and all the problems of enterprise software.

    Large vendors will rush headlessly after said acronym, providing value-added toolsets supporting the acronym that will "require no programming!".

    Now, will anyone dare to put up some money against my wager?
  353. Yes it is what me too sensing[ Go to top ]

    Yes, by 2008 there won't be enterprise developers, only assemblers with development knowledge.
    So guys, focus on the evergreen gaming and robotics markets.
  354. Beyond Java[ Go to top ]

    they were able to recreate the application in Ruby in 4 nights

    One thing I just can't buy about such a "rumour" is how it completely ignores the most challenging part of building a software application: understanding, modeling, and addressing the problem domain. The last time I tried to solve the same problem twice (ahem: never), I'm sure I'd have gotten it done in four days as well.

    Bruce says that most apps best fit for Java are those requiring "hardened ORM" or "two-phase commit" or "heavy threading". Frankly, that is a techy, somewhat shortsighted answer. Most enterprise applications solve problems in complex domains (think of a currency exchange system, a flight planning system, or an intrusion detection system) and integrate a wide range of systems, some legacy, some new (think of a payment processing engine, a warehouse management system, or systems to support a supply chain). Developing and testing non-trivial business logic becomes the key problem, as does enterprise integration.

    I personally would prefer to read more books about how to address the issues above rather than how to implement simple CRUD logic productively. Thank goodness for DDD.

    I also don't like how Bruce blurs the line between what productivity benefits are realized because of Ruby the language and what is because of Rails the framework (the same goes for the Java side of the coin). Much of Rails is directly translatable back to a Java framework, particularly the convention-over-configuration for basic CRUD. The RIFE framework is doing exactly this right now, and you can expect Spring to be offering solutions in this area as well.

    Rails is a good technical solution for isolated (departmental) CRUD webapps. But it does imply an obvious paradigm shock and loss of leverage that will hamper adoption. Furthermore, much of the RoR innovation is being directly translated back to today's leading Java frameworks.

    Keith
  355. Thanks for your comments, Keith.[ Go to top ]

    Well reasoned reply as always. If Web Flow is any indication of where you're taking Spring, Java will indeed close the gap. I'll continue to promote Spring where it fits. To your points:

    You should be aware that in both cases we started with a working data model, and we started from a working user interface (though a poor one). Justin blogged about the second and third generations of the app.

    But your points are valid. I do not think that the one-month-per-night ratio is an accurate reflection of Java vs. Ruby productivity. It was enough of a jolt to make me take notice, though. I do, however, think that a whole lot of Java application development today is primarily about web-enabling a big, fat relational database. And Rails is very good at that. If I can stay within the Rails domain, I'm very productive. And I'm finding that a whole lot of Java applications today fit. Not all, but certainly a significant percentage. I say in Beyond Java that this problem set was the base for Java, and frameworks are abandoning that base. Java's too hard and not productive enough.

    You see a blurry line between what's Rails and what's Ruby because the line between the two *is* blurry. Rails uses metaprogramming to dynamically add behavior and attributes to classes, and to implement domain-specific languages (Active Record uses a Ruby DSL for ORM).
  356. Ruby and DSLs[ Go to top ]

    You see a blurry line between what's Rails and what's Ruby because the line between the two *is* blurry. Rails uses metaprogramming to dynamically add behavior and attributes to classes, and to implement domain-specific languages (Active Record uses a Ruby DSL for ORM).

    This is a key point. Ruby's metaprogramming facilities allow it to *feel* like coding in a DSL without taking away all the capabilities of a general purpose language, or introducing new syntax, etc.

    Java should have a way to do the same.
  357. DSL Java[ Go to top ]

    Absolutely agree, Erik. Java needs metaprogramming facilities that make it easily usable as a domain-specific language.
  358. Beyond Java[ Go to top ]

    I can't argue with the points made in this article.

    -Java is moving away from its base. Hardcore enterprise problems may be easier to solve, but the simplest problems are getting harder to solve.
     
    -Java is showing signs of wear, and interesting innovations are beginning to appear outside of Java.

    -It's time to start paying attention again. It's time to look at the horizon, beyond Java.

    Rails seems to be "the next big thing" for a particular class of app. It somewhat reminds me of AppFuse.

    I saw Bruce Tate at a No Fluff Just Stuff a while ago, and his points were made very well. You gotta at least sit up and listen when people like Bruce and Dave Thomas talk.

    I'm learning a lot from Ruby and Rails, and they allow me to focus more on the domain, aka DDD.
  359. Behind Java[ Go to top ]

    Could Ruby and frameworks such as Ruby on Rail be the next big thing?
    No.
    The only valid conclusion is that, for people who should probably not have been using J2EE in the first place because they build small CRUD-like web apps deployed on Tomcat with a single CPU, it makes more sense to use ASP.NET or PHP or Python or Ruby. Is anyone surprised?
    This is actually quite far from saying that the whole Java/J2EE platform will be displaced by an "innovation" like Ruby on Rails. Java will be displaced someday (just like COBOL was... no, wait!) but it will take considerably more than a tiny little Ruby framework. The innovations that Microsoft is working on for the CLR, for example, are much, much more important and substantial than Rails: LINQ, C# 3.0, the ML- and Haskell-inspired work done by MS Research...
    Also, considering the investment made by the industry to adopt Java/J2EE over the past years, it is extraordinarily naive to imagine that enterprises are going to change course for a small, theoretical gain in development time that would affect only trivial web apps developed by teams of 2-3 and thrown away within 6 months. Nor do I foresee any massive retraining of enterprise developers to Ruby. Nope, it's a fine language that has its place, but it is foolish to imagine that it is compelling enough to replace an industry heavyweight standard like Java.
    Out of curiosity, is anyone predicting that Common Lisp will displace Java? Why not? It's more powerful than any scripting language, it's an ANSI standard, it has industrial-grade commercial implementations, development is amazingly fast... Isn't there any consultant willing to publish a book about Common Lisp, the "next big thing", and bet his career on that?
  360. Common LISP[ Go to top ]

    Out of curiosity, is anyone predicting that Common Lisp will displace Java ? Why not ? It's more powerful than any scripting language, it's an ANSI standard, it has industrial-grade commercial implementations, development is amazingly fast... Isn't there any consultant willing to publish a book about Common Lisp, the "next big thing", and bet his career on that ?

    Because I don't think anyone can explain Common LISP to the average programmer.

    Maybe we can create a *real* role for the fabled "software architect."

    It's the architect's job to implement the metaprogramming functionality that allows the rest of the team to work more efficiently.

    Thinking that way is hard. Communicating it is even harder. If someone can do it, and do it repeatedly, then they really do deserve a lofty title.
  361. Common LISP[ Go to top ]

    Because I don't think anyone can explain Common LISP to the average programmer.

    Come on, Common Lisp is *far* simpler than Java or Ruby (and even CLOS is far easier than Java 5.0), and closures are first-class citizens in CL. And why did someone invent such a complex syntax (Ruby) on top of Scheme?

    BTW, I still do not understand why people argue over Perl vs Python vs Ruby... THEY ARE ALL THE SAME (except for some syntactic sugar: dynamic, OO, ...). And what is this big deal concerning the differences between Java and Ruby? What is so innovative in Ruby (I mean by 200x standards, not back in the 80's)? Hmm... (browsing the doc...) The syntax seems even worse than Perl's. Bingo: Ruby does have closures... hmm, that's all. And Java is more strongly typed... fair enough.

    So I bet they used a lot of closures in Beyond Java :P

    Think differently...
  362. Common LISP[ Go to top ]

    Because I don't think anyone can explain Common LISP to the average programmer. Maybe we can create a *real* role for the fabled "software architect." It's the architect's job to implement the metaprogramming functionality that allows the rest of the team to work more efficiently. Thinking that way is hard. Communicating it is even harder. If someone can do it, and do it repeatedly, then they really do deserve a lofty title.
    A programmer who cannot grok Common Lisp will not grok metaprogramming constructs or closures in Ruby either, I'm afraid. You are right in saying that the metaprogramming facilities can be used by technical leads for creating frameworks and domain-specific languages that abstract away some of that complexity for the more junior or less skilled developers.
  363. Behind Java[ Go to top ]

    Indeed, RoR does not even produce a good, usable simple CRUD app unless you start writing real non-model code again.

    I'm amazed that a development conference can even be organized around such a framework.
  364. Common Lisp[ Go to top ]

    Could Ruby and frameworks such as Ruby on Rail be the next big thing?
    The innovations that Microsoft is working on for the CLR, for example, are much, much more important and substantial than Rails: LINQ, C# 3.0, the ML- and Haskell-inspired work done by MS Research...

    I completely agree.

    Out of curiosity, is anyone predicting that Common Lisp will displace Java? Why not? It's more powerful than any scripting language, it's an ANSI standard, it has industrial-grade commercial implementations, development is amazingly fast... Isn't there any consultant willing to publish a book about Common Lisp, the "next big thing", and bet his career on that?

    Yes: Common Lisp is ANSI Common Lisp. The language itself has evolved over the last 50 years, and it now has high-quality open-source and commercial development environments.

    There are new books published about Common Lisp:

    - Practical Common Lisp
    http://gigamonkeys.com/book/

    - Successful Lisp: How to Understand and Use Common Lisp
    http://psg.com/~dlamkins/sl/cover.html

    As for web programming, check out the UnCommon Web framework.

    People have started to talk about a Lisp renaissance, and there are enough signs to justify that.
  365. I don't get it[ Go to top ]

    I don't understand why people talk about Ruby on Rails as a replacement for Java. It's like saying "an apple is a replacement for food."

    Ruby on Rails is for writing web applications, right? So while Ruby on Rails is possibly a better alternative to one of Java's many web frameworks -- FOR WEB APPLICATIONS -- I don't see how Ruby on Rails is in itself competitive with Java.

    The same goes for the whole inane argument that "PHP is better than Java". Can I write a compiler in PHP? WTF?

    What we do need to do, as a community, is listen to what developers want in a language. In my opinion, Java 1.5 brought numerous not-so-good features to Java, at a very high development cost.
  366. Groovy...[ Go to top ]

    Groovy can do lots of the things that Ruby and other scripting languages can do. It is still rough, and I think they need to better define how it integrates with Java, but it can use Java classes and the Java API, and it adds tons of neat features.

    Since it's JSR'd, it should rapidly develop into a formal scripting language for Java-land, but I imagine its close association with Java may hamper its toolset and infrastructure, relative to Ruby, which has full freedom to start anew.
  367. Groovy...[ Go to top ]

    ... Since it's JSR'd, it should rapidly develop into a formal scripting language for Java-land, but I imagine its close association with Java may hamper its toolset and infrastructure, relative to Ruby, which has full freedom to start anew.

    I was in love with the idea of Groovy -- about 2+ years ago, when it seemed to have some promise. Talk about an extremely good idea left to rot out in no-man's land. Where is the community interest, the books, the injection into frameworks, the tool support, etc.? The last TheServerSide article I could find that was directly about Groovy evoked a paltry 10 responses, 3 of them from the author.

    And so what if it is a JSR? I'm sure the Java vineyard is riddled with JSRs that died on the vine. Not that I'm whining.
  368. Beyond Java[ Go to top ]

    Bruce, I think you should take a look at http://jboss.com/products/seam
    I think that and EJB3 are the future of Java.
  369. Service applications[ Go to top ]

    I haven't used ROR, but I have explored Ruby in the past as a potential platform for writing a RETE rules engine. The last time I worked on web UI was a few years back, but I'm curious to hear people's experience building service applications using ROR.

    Back when I worked on web UI, it was service applications that had to support internationalization and co-branding. By co-branding, I mean services that can use one or more UI templates with a common business layer. Think of a portal that provides a variety of applications/features as discrete modules. From customer to customer, the UI requirements will change drastically and the data has to move between systems in a relatively seamless manner.

    When data changes, it's not just 1 database. It's several service calls between 4-8 systems. Has anyone attempted to build this type of service using ROR?

    I'm just curious.

    peter
  370. Refactoring?[ Go to top ]

    Please tell me how you can refactor in untyped languages. Just go down the list of IntelliJ's refactoring menu and describe to me how it is even possible to do it without a type system.
  371. Refactoring?[ Go to top ]

    Please tell me how you can refactor in untyped languages. Just go down the list of IntelliJ's refactoring menu and describe to me how it is even possible to do it without a type system.

    I'm not sure what you mean by untyped (this and the term 'dynamically typed' still confuse me, after decades). However, take a look at any good Smalltalk implementation. Smalltalk is untyped (or dynamically typed) in the same way that Ruby is, but it has a huge number of refactoring tools. In fact, some of the first uses of the term 'refactoring' were regarding Smalltalk.
  372. Refactoring?[ Go to top ]

    Please tell me how you can refactor in untyped languages. Just go down the list of IntelliJ's refactoring menu and describe to me how it is even possible to do it without a type system.
    I'm not sure what you mean by untyped (this and the term 'dynamically typed' still confuse me, after decades). However, take a look at any good Smalltalk implementation. Smalltalk is untyped (or dynamically typed) in the same way that Ruby is, but it has a huge number of refactoring tools. In fact, some of the first uses of the term 'refactoring' were regarding Smalltalk.

    So, if you have dynamic typing, how can you do something as simple as find all usages of a specific class? Or how about renaming a method/field in your entire code base? Those are the simple things. What about more complex things like moving methods around between classes and being automatically warned where things will get messed up?

    I don't know Smalltalk, but I have used Tcl, Perl, Python, and PHP. Please, I'm open-minded; elaborate.

    thanks, bill
  373. Refactoring?[ Go to top ]

    ... So, if you have dynamic typing, how can you do something as simple as find all usages of a specific class? Or how about renaming a method/field in your entire code base? Those are the simple things. What about more complex things like moving methods around between classes and being automatically warned where things will get messed up? I don't know Smalltalk, but I have used Tcl, Perl, Python, and PHP. Please, I'm open-minded; elaborate. -- thanks, bill

    I've used Smalltalk commercially, and the browsers supported finding all possible usages of a class, etc. You tend to look at all possible examples of the message send "foo:", which sometimes gives you more than you want.

    On the refactoring front, however, you may be surprised to learn that the early refactoring work was actually done in Smalltalk. In particular, Will Opdyke's thesis introduced a refactoring Smalltalk browser. Refactoring is about preserving invariants.

    Cheers,

    Andrew
  374. Refactoring?[ Go to top ]

    Yes... the Smalltalk people INVENTED refactoring... and also XP and design patterns and other cool stuff.

    Smalltalk is an environment, not just a language -- a VM taken to the extreme. At all times you have an active runtime. Since the code is so elegant and relatively uniform (no messy keywords to confuse), it is easy to write code that analyzes your code immediately. In fact, whenever you made a change to a method, it wouldn't even let you leave the method if you made a syntax error or referred to something non-existent.
  375. Refactoring?[ Go to top ]

    In fact, whenever you made a change to a method, it wouldn't even let you leave the method if you made a syntax error or referred to something non-existent.
    One of the many reasons why Smalltalk drove so many people nuts...

    --
    Cedric
  376. Refactoring?[ Go to top ]

    In fact, whenever you made a change to a method, it wouldn't even let you leave the method if you made a syntax error or referred to something non-existent.
    One of the many reasons why Smalltalk drove so many people nuts... -- Cedric

    I'm sorry, but this statement is just plain silly. The IDE warned you that there were unresolved references, and you could choose to ignore the warning. This is no more annoying than having Eclipse etc. leave a red mark in the margin when it finds a problem with your code.

    Kirk
  377. Refactoring?[ Go to top ]

    In fact, whenever you made a change to a method, it wouldn't even let you leave the method if you made a syntax error or referred to something non-existent.
    One of the many reasons why Smalltalk drove so many people nuts... -- Cedric
    I'm sorry, but this statement is just plain silly. The IDE warned you that there were unresolved references, and you could choose to ignore the warning. This is no more annoying than having Eclipse etc. leave a red mark in the margin when it finds a problem with your code. -- Kirk
    It is *way* more annoying if it won't let you leave the method until the error has been corrected, which is my recollection of Smalltalk IDEs as well (I sure hope this has changed now).

    Interestingly, this is exactly how everybody using an IDE programs these days: write code with errors (tests before the implementation or invoking code that hasn't been written yet) and use "quick fixes" to let the IDE drive you and generate the scaffolding as you go.

    Try a modern IDE, Kirk, it will be enlightening to you and maybe you'll finally get over the fact that Smalltalk didn't take over the world (you might even begin to understand why :-)).

    --
    Cedric
  378. Refactoring?[ Go to top ]

    It is *way* more annoying if it won't let you leave the method until the error has been corrected, which is my recollection of Smalltalk IDEs as well (I sure hope this has changed now).

    The way I remember Smalltalk IDEs from a decade or more ago was that the IDE was part of the system that you were free to change. If you didn't like the behaviour, you could change it (and many did).
    Interestingly, this is exactly how everybody using an IDE programs these days: write code with errors (tests before the implementation or invoking code that hasn't been written yet) and use "quick fixes" to let the IDE drive you and generate the scaffolding as you go.

    I remember Smalltalk IDEs that would suggest corrections for errors, and that offered a wide range of code completions and 'scaffolds'.
    Try a modern IDE, Kirk, it will be enlightening to you and maybe you'll finally get over the fact that Smalltalk didn't take over the world (you might even begin to understand why :-)). -- Cedric

    I really don't think the IDE was a factor. Most language IDEs don't (in my experience) come anywhere close to what Smalltalk IDEs could do with the object inspectors and 'evaluate it/inspect it/print it' options available on every control where text or source code was visible. The only Java system that has come close was VisualAge Java, which was (I think) based on some sort of Java/Smalltalk hybrid system.

    Most modern development systems still haven't grasped what was good about Smalltalk development: errors and exceptions didn't halt an application -- they simply opened a breakpoint window, where code and data could be inspected and modified. The method that was interrupted could be changed, re-compiled and re-started. If there is one thing I could change about Java development, it would be for exceptions to act like Smalltalk's, and be resumable.

    But anyway, I think the reason that Smalltalk didn't take over the world was because the quality cross-platform versions ended up with mostly restrictive, awkward and expensive licensing. Just as this happened, Java turned up... which was object-oriented, free and cross-platform.
  379. Refactoring?[ Go to top ]

    Most language IDEs don't (in my experience) come anywhere close to what Smalltalk IDEs could do with the object inspectors and 'evaluate it/inspect it/print it' options available on every control where text or source code was visible.
    Can you name one feature that Smalltalk IDEs had that cannot be done with today's Java IDEs?

    I can't.

    Not only that, but Java IDEs go further than the Smalltalk IDEs ever went.

    There's nothing wrong with that, by the way: we're comparing tools that are more than a decade apart, so it's not very fair, and I certainly don't mean to downplay the revolutionary aspect of the Smalltalk IDE back in its time.

    But come on... it's not even close to what we can do today in Java.
    Most modern development systems still haven't grasped what was good about Smalltalk development: Errors and Exceptions didn't halt an application - they simply opened a breakpoint window
    Again: trivial to do in any IDE today.
    But anyway, I think the reason that Smalltalk didn't take over the world was because the quality cross-platform versions ended up with mostly restrictive, awkward and expensive licensing. Just as this happened, Java turned up... which was object-oriented, free and cross-platform.
    These factors certainly contributed, but don't underestimate the impact that Smalltalk's syntax had on potential users. Many were completely turned off by it (and still are -- even now, when our minds are more open).

    Similarly, the fact that Java's syntax was an evolution of the C and C++ syntax certainly contributed to its wide adoption.

    --
    Cedric
  380. Refactoring?[ Go to top ]

    Most language IDEs don't (in my experience) come anywhere close to what Smalltalk IDEs could do with the object inspectors and 'evaluate it/inspect it/print it' options available on every control where text or source code was visible.
    Can you name one feature that Smalltalk IDEs had that cannot be done with today's Java IDEs? I can't.

    I can, but you are probably going to (reasonably) say that this isn't fair. It is the 'save image' function: the ability to save all object states, and then restore them for inspection and debugging at a later date.
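
    For comparison, the nearest thing standard Java offers is serializing a single object graph - a pale shadow of an image, which snapshotted every object in the system, including the running tools themselves. A rough sketch (the class name is invented):

    import java.io.*;

    // A crude stand-in for 'save image': writes one object graph to
    // disk and reads it back later. A real Smalltalk image captured
    // the whole system state, not just one graph.
    class Snapshot {
        static void save(Serializable root, File file) throws IOException {
            ObjectOutputStream out =
                new ObjectOutputStream(new FileOutputStream(file));
            try {
                out.writeObject(root);
            } finally {
                out.close();
            }
        }

        static Object restore(File file)
                throws IOException, ClassNotFoundException {
            ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(file));
            try {
                return in.readObject();
            } finally {
                in.close();
            }
        }
    }
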
    it's not even close to what we can do today in Java.

    I disagree. See below.
    Most modern development systems still haven't grasped what was good about Smalltalk development: Errors and Exceptions didn't halt an application - they simply opened a breakpoint window
    Again: trivial to do in any IDE today.

    No, it really isn't. Note what I said: the application was not halted by any exception. The source code of the appropriate method was available for recompilation and restarting - from any exception. You could fix the problem and then resume the live application as if nothing had happened. I may be missing something, but I have not seen this in any IDE today.
    But anyway, I think the reason that Smalltalk didn't take over the world was because the quality cross-platform versions ended up with mostly restrictive, awkward and expensive licensing. Just as this happened, Java turned up... which was object-oriented, free and cross-platform.
    These factors certainly contributed, but don't underestimate the impact that Smalltalk's syntax had on potential users. Many were completely turned off by it (and still are -- even now, when our minds are more open). Similarly, the fact that Java's syntax was an evolution of the C and C++ syntax certainly contributed to its wide adoption. -- Cedric

    This is a fair comment, but I don't think it is entirely true. In the 1980s and early 90s, Digitalk Smalltalk/V seemed to be highly successful. I used the DOS/286 and Win16 versions a lot. It was such a successful product that Bill Gates himself said: "Smalltalk/V PM is the kind of tool that will make OS/2 the successor to MS/DOS". We may well laugh now (and many do!), but it was an indication of the high profile of the language. Syntax did not seem to be a factor (after all, Smalltalk was used for substantial projects). Poor marketing and commercial buy-outs seemed to be the factors that destroyed things, putting Smalltalk beyond the pockets of the average developer for a while.

    My impression is that so many people were put off by the debacle of Smalltalk that they were looking for something more familiar that was less of a risk. The C/C++ syntax of Java made it seem hugely less risky.
  381. Refactoring?[ Go to top ]

    Can you name one feature that Smalltalk IDEs had that cannot be done with today's Java IDEs? I can't.

    Well, I can: inspect, save, and change a variable's state in an execution environment. Java IDEs can change some code on the fly, but Smalltalk gave you far more capability in this regard. This made debugging in Smalltalk more than ten years ago easier than debugging Java is today.

    Integration with source code control was seamless. There is no Java IDE that can claim the level of integration that Smalltalk had with Envy. Shall I go on?

    But anyway, I think the reason that Smalltalk didn't take over the world was because the quality cross-platform versions ended up with mostly restrictive, awkward and expensive licensing.

    I don't think I can point to any one factor that caused Smalltalk's fall from grace. There was a lot of uncertainty in the Smalltalk marketplace in 96/97. Digitalk had been bought by ParcPlace; ParcPlace had frozen its own market by promising features it hadn't delivered, and ParcPlace itself was in a mess. All of this made businesses very nervous. It wasn't helped by the licensing fees, by developers not understanding the syntax and the environment, or by the perception that Smalltalk was slow. All of this happened just as Java was coming on the scene, and even that is only the tip of the iceberg.

    My impression is that so many people were put off by the debacle of Smalltalk that they were looking for something more familiar that was less of a risk.

    What debacle? Yes, people were looking for something more familiar, but I'd not call that a debacle.

    Kirk
  382. Refactoring?[ Go to top ]

    What debacle? Yes, people were looking for something more familiar, but I'd not call that a debacle. -- Kirk

    Indeed. I don't remember Smalltalk being taken up enough to cause any sort of debacle. I don't think people were looking for something more familiar because of any effect of Smalltalk. I think it was the other way around - Smalltalk was not widely adopted because people wanted something more familiar.
  383. Refactoring?[ Go to top ]

    Well, I can: inspect, save, and change a variable's state in an execution environment. Java IDEs can change some code on the fly, but Smalltalk gave you far more capability in this regard. This made debugging in Smalltalk more than ten years ago easier than debugging Java is today. Integration with source code control was seamless. There is no Java IDE that can claim the level of integration that Smalltalk had with Envy. Shall I go on?
    Yes please, because these two features are available in Java IDEs today and are much more sophisticated than Smalltalk ever provided (surely we're not comparing the source control systems of today with what was done fifteen years ago?!).
    I don't think I can point to one factor that contributed to Smalltalk's fall from grace.
    You seem to assume that Smalltalk reached a state of grace at some point :-)
    All of this happened just as Java was coming on the scene and this is only the tip of the iceberg.
    C++ was a far more real and effective threat to Smalltalk than Java ever was. By the time Java became popular, Smalltalk had already become a niche language, getting more marginalized by the day.

    --
    Cedric
  384. Refactoring?[ Go to top ]

    Yes please, because these two features are available in Java IDEs today and are much more sophisticated than Smalltalk ever provided