News: EJB 3.0 Announcement at TheServerSide Symposium

  1. By far the biggest news of TheServerSide Symposium was the EJB 3.0 coming-out party. People have been alluding to this for a while, and today we found out that EJB 3.0 is moving to a lightweight programming model.

    No more home interfaces, deployment descriptors, or SessionBean interfaces. Welcome, annotations. There has been a mixed reaction to the news.

    This was an early look into the JSR, and it is of course subject to change... but the programming model is set.

    Your new session beans will look like:
    @Session public class CalculatorBean {
     public int add(int a, int b) {
      return a + b;
     }

     public int subtract(int a, int b) {
      return a - b;
     }
    }
    This will use all of the defaults (for what we used to put in the deployment descriptors). A Calculator interface will be created.
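    For illustration, the implied business interface would presumably look something like this (our reading of the draft; the exact generation rules are not confirmed):
    public interface Calculator {
     int add(int a, int b);
     int subtract(int a, int b);
    }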

    JNDI is no longer going to be THE way to get at resources. Dependency Injection will be supported in a variety of ways such as:
    @Session public class MyBean {
     private DataSource customerDB;

     @Inject private void setCustomerDB(DataSource customerDB) {
      this.customerDB = customerDB;
     }

     public void foo() {
      ...
      Connection c = customerDB.getConnection();
      ...
     }
    }
    As I said, the reaction has been mixed, and rather than censor it... here are some blog entries on the topic:

    Cedric: EJB 3.0 officially announced

    Jason: TSSS: EJB 3.0 Work in progress

    Debu: EJB 3.0 – Looks simply great!

    Cameron: EJB 3.0

    You can find many more on TheServerSide Symposium Wiki


  2. Third time lucky?

    You have to worry about a spec that is still trying to get it right on the third attempt.

    Personally I'll be looking to use Hibernate or JDO where I want transparent object-relational persistence and plain DAO/JDBC where I want more control over persistence.

    I think EJB 3.0 is in danger of alienating the existing customer base and confusing new users. IMHO, it would have been better to embrace JDO as an alternative persistence model within J2EE.

    Andy Grove
    Code Futures, Ltd
  3. My own perspective is well known, but for completeness I'm a member of the JDO expert group.

    When the completed JSR-220 (EJB 3.0) specification finally comes before the Executive Committee for adoption in a year or two's (or three's) time, the votes of Oracle, IBM and BEA can be assured.

    It will be up to the remaining 13 members of the Executive Committee to decide between them whether it is in the Java community's interests to adopt a new standard which seeks to further entrench the positions of the present application server vendors by:

    1. Supporting in practice only a relational model of persistence, despite the wide variety of datastore paradigms in use in the enterprise, by essentially insisting that a JDBC driver be used.

    2. Raising the barriers to new entrants in that space, by insisting on backward compatibility with Entity Beans 2.0 / 2.1 whilst at the same time admitting that they have always been flawed and all but replacing CMP/CMR with POJO-based transparent persistence.

    I'm pleased that EJB 3.0, when it finally arrives, will advocate transparent persistence. I regret the manner in which this is being achieved; instead of adopting a very credible standard behind which many vendors (including Hibernate) can compete, they are evidently inventing something "new" which, coincidentally, will already be implemented by a single product in the marketplace.

    It would be better for the marketplace if EJB 3.0 deprecated Entity Beans and said nothing about persistence, allowing freedom of choice without undue influence.

    When the time comes, I hope that the 13 members of the Executive Committee have the guts.
  4. Robin, it might be a surprise to you, but there is no other model for persistent data than the relational model. If you think otherwise, prove it.
  5. Robin, it might be a surprise to you, but there is no other model for persistent data than the relational model.
    If I may chime in: there are some things that could qualify as models of "persistent" data - for instance "plain old" filesystems or XML. I have my reservations, because filesystems usually lack a formal model behind them and XML is just a mess (the Infoset was defined as an afterthought, and it shows).

    But persistence is just a small piece of the data management puzzle. It is essential, but orthogonal to the model itself. And as you most likely want a useful model of data which addresses structure, integrity and manipulation, the relational model is really the only game in town.
  6. Data models and standards

    Thanks for this clarification and your correct definition of a data model. So, by this definition, hierarchical and network concepts are not as viable (and probably never will be) as "data models", since they don't define a clear normalized data structure (removing redundancy), guarantee integrity and data safety, or provide well-defined operations to manipulate/query data.

    No object-oriented "data model" provides that, and every attempt so far was a mess that added significant overhead, but no advantages, to the relational model. This is the reason why object "databases" never had success. And this is a good thing! OODBs are not really databases, as you can't even share your data anymore if you have a fixed network structure where arbitrary joins and navigation are no longer possible. You may call it a "persistent data store", but not a database. There are many things wrong with "relational" database management systems, but not with the relational data model and the basic concepts. Fix the DBMS! Call Oracle!

    Now, back to Java. What we need is a high-quality, work-minimizing and "as good as possible" integration of this object-oriented environment with relational systems. I've seen JDO2 and I don't think that "standardize the way to get a SQL connection" is going to cut it. Adding "aggregation to the query language" is also not good enough. JDO is still an API standard for object "databases", which are, as explained above, not relevant. I admit that there are niches in the enterprise that might use an object data store (standalone systems with non-business data, for example), but for every sensible application of an OODBMS, there are thousands for an RDBMS. We are talking about a 1% probability here, and we had better not start discussing the XML stuff. No one needs it.

    I know that Robin (are you the official spokesperson for JDO2 now? I have some more interesting questions for you...) will now reply and point out that all of his 20 vendors (or was it users?) have JDO ORM tools, not object databases. So, why is "getting a SQL connection" still the only relational feature in that standard? You copied the Hibernate feature list last year for your kickoff meeting, but forgot half of the really interesting stuff.
  7. data models

    aah, well said (the model part). I'll keep that for later reference.

    christian
  8. Data models and standards

    I've seen JDO2 and I don't think that "standardize the way to get a SQL connection" is going to cut it. Adding "aggregation to the query language" is also not good enough. JDO is still an API standard for object "databases", which are, as explained above, not relevant.
    I'll give you the benefit of the doubt, and assume you are simply misinformed. Otherwise I would have to accuse you of having much more sinister intentions.

    Rather than explain to you why many users like to persist to alternative non-relational stores like ODBs, LDAP systems, legacy systems, and others, I'll give you a challenge:
    Show me a single planned feature of the persistence part of EJB 3 that JDO 2 does not address (or for that matter, current JDO 1.0.1 O/R vendors... but let's stick to JDO 2 for a spec-vs-spec comparison rather than get into product details). That's all you have to do. For all this talk of JDO not being "good enough" and being too ODB-centric to handle relational data properly, I'm willing to bet that I can show you exactly how to mirror any persistence-related aspect of EJB 3 in JDO 2, a spec that will be out significantly sooner.
  9. Data models and standards

    Automatic mapping of the result of a custom SQL query to an object graph, with standardized API calls to execute the SQL and retrieve the objects.
  10. Data models and standards

    Automatic mapping of the result of a custom SQL query to an object graph, with standardized API calls to execute the SQL and retrieve the objects.
    Query q = pm.newQuery ("javax.jdo.query.SQL");
    q.setClass (Person.class);
    q.setFilter ("select * from person where firstName = 'Fred'");
    Collection results = (Collection) q.execute ();

    Each element of the result collection will be a Person object. And obviously there are also ways to do parameters, etc. But that's a simple example.
  11. Data models and standards

    q.setFilter ("select * from person where firstName = 'Fred'");
    Just to make it clear that is actually 'pure' SQL, I probably should have used a slightly better example:
    q.setFilter ("SELECT * FROM PERSON_TABLE WHERE FIRSTNAME = 'Fred'");
    And you don't have to select '*'; any columns will do so long as the primary key and class discriminator cols (if any) are included.
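    For instance (assuming ID is the primary key column of PERSON_TABLE), a column subset like this should also work:
    q.setFilter ("SELECT ID, FIRSTNAME FROM PERSON_TABLE WHERE FIRSTNAME = 'Fred'");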
  12. Data models and standards

    What happens if you "select a.*, b.* from Person a, Department b where ..."?

    Standardized API for pluggable custom mapping types.

    Standardization of "transitive persistence" with fine-grained cascading options to replace the awkward persistence by reachability concept which is not usable with relational databases.

    Runtime fetching of associations in the query language.

    ...
  13. Data models and standards

    What happens if you "select a.*, b.* from Person a, Department b where ..."?
    Hasn't been decided yet. Projections are certainly supported in standard JDOQL queries, but not sure about SQL ones. Then again, I can guarantee it hasn't been set in stone yet for EJB 3 either. The above wouldn't work in Hibernate either, BTW. Not without modification (and even then you'd run into Hibernate's problems with not uniquing query results).
    Standardized API for pluggable custom mapping types.
    Won't happen, and won't happen in EJB 3. At least, not an efficient standardized API. To have an efficient API, you need to be able to inject SQL into the same SQL statement being used to insert/update other fields in the row and to plug into things like SQL statement batching and foreign key constraint ordering that involve the internals of the container. Maybe EJB 3 will allow you to take over the CRUD operations via separate SQL statements to update custom fields. But that's not efficient. And more to the point, it's not of much value for something so rare to be standardized.

    If complete standardization of how to map a custom type makes it into the final EJB 3 spec, I'll certainly eat my words. But even if that does happen (and it seems highly unlikely that the container vendors would allow themselves to be standardized to the point of marginalization), JDO will by that time have had plenty of time to listen to users and create a standard plugin API if one is warranted.
       
    Standardization of "transitive persistence" with fine-grained cascading options to replace the awkward persistence by reachability concept which is not usable with relational databases.
    Runtime fetching of associations in the query language.
    ...
    You must have been misinformed again. I have yet to see a single JDO user -- and I've dealt with some, shall we say, less-than-brilliant users! -- have a problem with persistence by reachability. In fact, it is a significant boon to development.
  14. Data models and standards

    How do you delete objects with persistence by reachability?
  15. Data models and standards

    Christian,

    I realize now that my challenge may have been misguided. It is clear from your original counter-challenge of constructing a SQL query that, contrary to your assertion that you "have seen JDO 2", you really don't know what's in it. Moreover, because JDO 2 is very close to reality, it is realistic in its goals, whereas right now EJB 3 is so far away that you can pretty much say any feature is "planned"... the chopping block won't come into play until later.

    Suffice it to say that when you have to start getting into things like standardized custom container plugins for unsupported field types, or insisting that persistence by reachability doesn't work for relational DBs (I think all current JDO users and many Hibernate users who take advantage of the 'save-update' cascade option would be shocked to hear this), you're scraping the bottom of the barrel. It shows that JDO 2 does in fact handle relational persistence as well as anything else out there, or in this case even something that won't see the light of day for quite some time (EJB 3). The fact that JDO can handle other back ends as well is gravy.

    And seeing as it's now 6 in the morning here, I think I'll get some shut eye. Take it easy.
  16. Data models and standards

    You realize that "cascading" is not the JDO persistence by reachability model, but a much more flexible approach that actually works without Lifecycle interfaces or manual deletes. My question was if this is still the JDO(2) concept. By the way, what happened to that idea of having two XML files per persistent class?
  17. Data models and standards

    Hi Christian
    By the way, what happened to that idea of having two XML files per persistent class?
    You can have one file for your whole model, one file per package or one file per class or a mix. You can also choose to separate out the O/R mapping information into separate files if you wish. You can use this to have different mappings for different databases (e.g. myapp.jdo, myapp.jdor-oracle, myapp.jdor-mysql) with the correct mapping file chosen at runtime.
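    As a sketch of what the runtime selection might look like (the property name here is illustrative, not final; a vendor PMF class and connection settings would also be needed):
    Properties props = new Properties ();
    props.setProperty ("javax.jdo.option.Mapping", "oracle"); // picks myapp.jdor-oracle
    PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory (props);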

    Cheers
    David
  18. Hi Christian

    JDO has persistence-by-reachability.

    It does not provide deletion-by-unreachability, which would be pretty dangerous. Anyway, all persistent objects are inherently reachable by Query even if they are otherwise unreferenced.

    JDO 2.0 provides for deletion cascade. Most implementations will have this delegated to the underlying triggers of the database as this is most efficient. In fact, JDO 1.0 vendors have supported this for quite a while now.
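    A minimal sketch of both behaviours (Order, LineItem and orderId are invented, and the line items are assumed to be marked dependent in the metadata):
    PersistenceManager pm = pmf.getPersistenceManager ();
    pm.currentTransaction ().begin ();

    Order order = new Order ();
    order.addLineItem (new LineItem ()); // the line item becomes persistent with
    pm.makePersistent (order);           // the order, by reachability

    Order old = (Order) pm.getObjectById (orderId, true);
    pm.deletePersistent (old); // dependent line items are deleted with it

    pm.currentTransaction ().commit ();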

    Kind regards, Robin.
  19. EJB 2.x bankrupt statement

    Hi,

    I must say this spec draft looks indeed good, but on the other hand, with so many changes in EJB 3, it's like the expert group is saying "We totally sucked at EJB 2.x and now we are trying to do better." (I know that annotations were not there at that time, but I am speaking more of the Entity Beans area and how they are handled.)

    I am still not convinced to leave JDO, but EJB3 makes the decision harder, at least for J2EE. I am still dreaming of some ORM standard (JDO) that spans J2SE AND J2EE.
  20. IMHO the big J2EE vendors don't have any choice now but to jump on the lightweight framework bandwagon, and EJB3 will be their last chance. With such good OS projects as Hibernate, Spring, Tapestry, etc. out there taking over new development in enterprise systems, all they can do is follow the OS movement or be left behind. There are still many companies afraid of jumping into OS, so having big names behind lightweight frameworks could help, and this would be possible with EJB3.

    Besides, it is time deployment descriptors and annotations were standardized between LWFs like Spring, Pico, and others, in order to make component portability between them easier.

    BTW, I agree that EJB3 should include persistence services, but leave the choice of how to do that to the developer, instead of mandating any specific persistence standard. Define what is missing, and aggregate what already exists and is working; don't reinvent the wheel; and mainly: KISS.

    All in all, this will just help improve java, foster j2ee market and fight .Net.

    Regards,
    Henrique Steckelberg
  21. <Henrique>
    ...
    such good OS projects like Hibernate, Spring, Tapestry, etc. out there taking over new developments in enterprise systems,
    ...
    </Henrique>

    or just use integrated Enhydra! With Enhydra you have:
    - Presentation layer (PO) technology supporting all kinds of XML (WML, HTML, cHTML, VoiceXML, ...).
    - Business layer as POJOs (BO) with a data layer (DODS) supporting an O/R mapper with an easy-to-use user interface!
    - For development you have Kelp Plug-Ins (Eclipse, NetBeans, JBuilder, etc.).
    - For the production environment you have Enhydra Director (an easy-to-install load balancer with Apache, installed via the setup). And each Enhydra instance can easily be installed as a Windows service or Linux daemon.

    Everything is in one box and with one installation setup!

    You'll wonder how easy Enhydra is to use: just as easy as PHP, but with object-oriented power. Enhydra is Open Source as well ;-) If you like KISS, you'll like Enhydra ;-) Check it out at: http://www.enhydra.org

    Cheers,
    Lofi.
  22. Data models and standards

    EJBs are not just about persistence. They are about creating distributed components. With EJBs I can expose a component as a CORBA call, RMI, or a WebService. I think a lot of this conflict between EJB and JDO is comparing apples and oranges. Container-managed persistence of entity beans may not be the most elegant way of persisting data, but it does so in a spec that allows vendors to compete on the implementation.

    For distributed components that persist data I want method-level transactions, security, caching, clustering....
    I develop components to be reused throughout the enterprise so I want control at the back end not the front.

    I'm not saying that JDO doesn't have its place. It probably should be in J2SE, but it is a bad idea to kill EJBs or MDEJBs. Don't cut your arm off just because you have an itch to scratch.

    weo
  23. Data models and standards

    I'm not saying that JDO doesn't have its place. It probably should be in J2SE, but it is a bad idea to kill EJBs or MDEJBs. Don't cut your arm off just because you have an itch to scratch.
    Somehow people perceive that I and other JDO advocates are inherently "anti-EJB". For remoteness, CMT and method-level permissions EJB 2.0 does a reasonable job, and EJB 3.0 - in part through its leverage of developments in the "lightweight container" camp - will probably do a better job. Do you need those capabilities for persistent data itself? No. You need them for services.
  24. Umm, check this out

    http://www.intersystems.com/cache/index.html

    Single native object (over sparse arrays) internal model, with "projections" to whatever is convenient: SQL tables for existing tools (reporting, etc.), objects (POJO, EJB, ActiveX, etc.) for development.
  25. "Robin, it might be a surprise to you, but there is no other model for persistent data than the relational model."

    Don't forget OODBMS and ISAM products as well...
  26. Robin, it might be a surprise to you, but there is no other model for persistent data than the relational model. If you think otherwise, prove it.
    Firstly, this is missing the point. What is needed is to persist objects, not have a data model for its own sake.

    Secondly, graph oriented data models (e.g. SDO) are data models too, and there is no particular trick to making such a data model persistent.

    Do you like object orientation, or are you more attracted to the mathematical properties of the relational data model? If the mathematical properties of the relational data model attract you, do mathematical properties in programming also attract you and if so why aren't you programming in non-imperative languages?
  27. I'm attracted by sound data management, not snake oil selling.
  28. Christian, Robin,

    At this point, your growing public animosity toward each other is not going to do your preferred object persistence implementations any good. Having seen many working and effective Entity EJB-, Hibernate- and JDO-based applications, I think we're past the point of pretending that "the other implementations" can't work.

    Why don't you try to explain the situations in which each of these excels, and why we would pick one or another? And feel free to explain the various strengths and weaknesses of the APIs / implementations, but please avoid the insults.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  29. Hi Cameron

    I don't believe Christian or I have traded any insults with each other (on this thread or elsewhere), but as this has evidently been your perception I most certainly appreciate your comments.

    Kind regards, Robin.
  30. I'm attracted by sound data management, not snake oil selling.
    Object Orientation is snake oil? Are you comfortable with transparent persistence? Do you realize that the EJB 3.0 plans include transparent persistence? The EJB 3.0 people seem to be avoiding the term, but transparent persistence is what they are planning. I suppose using the term would be awkward given that there was already an approved JSR for transparent persistence.
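    For the casual reader, transparent persistence amounts to code like this (a JDO-flavoured sketch; Person and id are invented):
    pm.currentTransaction ().begin ();
    Person p = (Person) pm.getObjectById (id, true);
    p.setFirstName ("Fred"); // no explicit save call; the change is tracked...
    pm.currentTransaction ().commit (); // ...and written back on commit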
  31. missing the point

    Firstly, this is missing the point. What is needed is to persist objects, not have a data model for its own sake.
    This is just completely wrong. The data model lasts much longer than the application. "Persisting objects" is at most /half/ the problem.

    Really, this attitude is exactly why object persistence technologies have never really gone mainstream until now. The object persistence crowd has always seen ORM as a poor substitute for flawed, failed OODBMS technology. The advent of the EJB3 spec heralds, among other things, the mainstream acceptance of object/relational mapping. Those of us who have worked hard to make ORM technology work - including us at Hibernate, and the TopLink guys at Oracle (who were there way, way before us) - are celebrating!
  32. missing the point

    Firstly, this is missing the point. What is needed is to persist objects, not have a data model for its own sake.
    This is just completely wrong. The data model lasts much longer than the application. "Persisting objects" is at most /half/ the problem. Really, this attitude is exactly why object persistence technologies have never really gone mainstream until now. The object persistence crowd has always seen ORM as a poor substitute for flawed, failed OODBMS technology. The advent of the EJB3 spec heralds, among other things, the mainstream acceptance of object/relational mapping. Those of us who have worked hard to make ORM technology work - including us at Hibernate, and the TopLink guys at Oracle (who were there way, way before us) - are celebrating!
    I'll just say that your comments are spot-on. The whole mindset that object persistence was the real problem has been the major stumbling block for the industry. The only nitpick I have is that I think "at most /half/" still overstates the importance of "persisting objects". I have to give a lot of credit to the TopLink guys for being lone voices of sanity for many, many years.

    I remember the EJB 2.0 expert group grappling with this issue -- and JDO in particular. Unfortunately, we worked in the context of too many constraints and not enough forward vision -- or platform capabilities -- at the time. What we came out with was kind of a mess, and I've always been surprised that EJB 2.0 got the traction it did. It took a long time coming, but looks like EJB is finally going to get it right.

    Greg
  33. Firstly, this is missing the point. What is needed is to persist objects, not have a data model for its own sake.
    This is just completely wrong. The data model lasts much longer than the application. "Persisting objects" is at most /half/ the problem. Really, this attitude is exactly why object persistence technologies have never really gone mainstream until now. The object persistence crowd has always seen ORM as a poor substitute for flawed, failed OODBMS technology. The advent of the EJB3 spec heralds, among other things, the mainstream acceptance of object/relational mapping. Those of us who have worked hard to make ORM technology work - including us at Hibernate, and the TopLink guys at Oracle (who were there way, way before us) - are celebrating!
    Entity beans were in the perfect position to make object persistence technologies mainstream. They failed, not because "the object persistence crowd has always seen ORM as a poor substitute for flawed, failed OODBMS technology", but because they didn't solve the problems people wanted solved; in the areas where they could have provided some benefit, they were just too painful to work with to justify their overhead; and finally, they were tied to heavy application servers that were overkill for most projects.

    For ORM to really go mainstream it must come out of the container, since most projects neither need nor want the container, however shattering that fact is for JBoss, IBM, Oracle etc. Method level security and transaction settings? The problem it purports to solve can easily be solved in a number of different ways, so no thanks, I'll just take the object persistence, no JBoss thanks.

    I am not saying that containers are useless, but forcing one on projects that want object persistence is a sure-fire way to make sure that that persistence standard will never go mainstream, even if it is a wonder of power and simplicity, which I doubt EJB3 will be, what with all the inherited crap from its hunchbacked ancestors. Not to mention that projects will shun it anyway thanks to the stigma that goes with the name.

    I am curious though, what direction will the Hibernate project take, more specifically, is a JDO version in the cards?
  34. missing the point

    Firstly, this is missing the point. What is needed is to persist objects, not have a data model for its own sake.
    This is just completely wrong. The data model lasts much longer than the application.
    Wrong, Mr King. The object model is more fundamental to the application than the underlying format used by a persistent object store.
    The object persistence crowd has always seen ORM as a poor substitute for flawed, failed OODBMS technology.
    ORM is a way to transparently persist objects, and is the most popular one. You are one of the transparent object persistence crowd, Mr King. Transparent object persistence is not tied to RDBMS or OODBMS, as you know. Your OODBMS reference is quite disingenuous.
  35. missing the point

    Wrong, Mr King. The object model is more fundamental to the application than the underlying format used by a persistent object store.
    In most of the applications I've been involved in over the years, the data model has been required to be accessed by many different sorts of processes - and not just Java ones. In such a world there are many code bases, all hitting various overlapping databases. The lesson from this is clear - the data model transcends the needs of a single application, and your data modelling should be driven not by the object model of a single application but by the aggregate needs of all of them.

    On top of the above - in many applications the data also outlives the code by a wide margin. This isn't true in something like a daily trading system, but it is true for systems that store much more historical data. Once again - the data model is more important than any given application, doubly so in this scenario since the data will likely outlast the app!

    The viewpoint you're talking about is an isolated system with dedicated access to a data store. Such beasties do of course exist - but the more you move towards an "enterprise" model, the less likely it is that you're involved with one.

    Most people who don't care much about data modelling in fact seem to come from a world where they own the database, and nobody but the developers for a single application ever need to store data to it and get data back out. In such a world an OO database, a simple file store, or just about anything is usually sufficient, and in fact you can match your data closely to your object model to get some development benefits.

    But once you lose ownership of the data - once the data is used by more than one application - then your assumptions go out the window. In business terms, in that scenario, the code isn't (quite) as important as the data, and in many cases the code can be highly ephemeral compared to the life of the data.

    The quintessential haters of databases, and lovers of file stores and OODBMS', are the old GUI guys doing Smalltalk. They fell in love with things like Gemstone/S; in fact they loved having data and code inseparable whenever possible. And it's true to an extent - doing everything in your app objects is natural and easy.

    But what if you don't have the only app that touches the data? What if two apps have radically different notions of what their object models should be but need to get at the same data? What if 8 such apps do? What if the business wants to plug in third party tools? This is where plain old RDBMS' shine - allowing fast access and efficient storage to data in an application-neutral format.

        -Mike
  36. missing the point

    Wrong, Mr King. The object model is more fundamental to the application than the underlying format used by a persistent object store.
    the data model transcends the needs of a single application, and your data modelling should be driven not by the object model of a single application but by the aggregate needs of all of them.
    In your thinking, an object model is associated with one application. That you have witnessed or participated in this practice I do not doubt at all. This practice is seriously suboptimal, as you notice. Instead of this practice, the object modeling should be done for whole domains i.e. domain object modeling. All applications using the domain use its domain object model. You would discourage application specific data models, yes? Equally, application specific object models are to be discouraged.
  37. Domain Object Model

    the object modeling should be done for whole domains i.e. domain object modeling. All applications using the domain use its domain object model. You would discourage application specific data models, yes? Equally, application specific object models are to be discouraged.
    Further to that - if you are unfamiliar with the term "Domain Object Model" and are more familiar with data modeling, the nearest equivalent in the data modeling world is subject data model.
  38. missing the point

    In your thinking, an object model is associated with one application. That you have witnessed or participated in this practice I do not doubt at all. This practice is seriously suboptimal, as you notice. Instead of this practice, the object modeling should be done for whole domains i.e. domain object modeling. All applications using the domain use its domain object model. You would discourage application specific data models, yes? Equally, application specific object models are to be discouraged.
    Sounds nice. I haven't seen this work in real environments, though. Multiple applications generally means multiple teams doing the implementation, and multiple reporting chains into management, usually different ones. This is not a recipe for people to adopt a common domain model beyond what common data sources may mandate (e.g. the database is the only real enforcer).

    Oh, I've seen efforts to do what you're talking about, typically firm-wide initiatives to add consistency at some level, often with a grand uber-architect or five running the effort. These kinds of efforts typically become complete boondoggles - multi-year efforts that suck money and resources, rarely if ever actually complete anything, and more often than not are the local laughingstock. In some companies, this is the group they put you in to send you a message to start looking elsewhere for a job (I kid you not!).

    Beyond logistical and political problems, there are also real problems with ownership, turnaround time, and differing requirements. As one example, in financial trading systems 5-15 applications can easily be operating on the same trading data, but each is working from radically different requirements - the front office trading desk systems, middle-office checker systems, ticket printers, back office trading feeds, trade matchers, pricing analytics engines, historical audit systems. Some of these may be written in Java, some in C++, some in C, some in C#, some in some legacy Windows technology. Some may even be written in proprietary languages. It's a struggle just to get such disparate applications to work with a common data model (and people often give up and write local "scratch" databases and feed into the "books and records of the firm"). But ultimately the firm will mandate the data model to some extent because it is so pervasive, and so many different processes will need access to it. Trying to mandate an object domain model on top of it? It's just not worth it.

         -Mike
  39. Domain Object Modeling

    In your thinking, an object model is associated with one application. That you have witnessed or participated in this practice I do not doubt at all. This practice is seriously suboptimal, as you notice. Instead of this practice, the object modeling should be done for whole domains i.e. domain object modeling. All applications using the domain use its domain object model. You would discourage application specific data models, yes? Equally, application specific object models are to be discouraged.
    Sounds nice. I haven't seen this work in real environments, though.
    I entirely believe that statement. What you have seen is enterprise object modeling, in which the same approach that is all too often used for application object modeling is attempted for the whole enterprise. The failure rate of enterprise object modeling is no better than that of enterprise data modeling. Enterprise approaches in which user concepts are directly represented and then merely categorized spin out of control due to unresolved complexity. Domain object modeling is not enterprise object modeling. In domain object modeling the regular properties of a domain are defined more formally than patterns. Note that the roles of domain expert and domain modeler are quite distinct, and a domain model will refer to objects with which the domain expert is initially unfamiliar and is a more abstract statement of regular properties that support standard algorithms.
    I know the kind of environment you are talking about, and you have my sympathies. Lack of lateral thinking is entrenched. Don’t give up. There are better ways.
    As a first illustration of new thinking in design, have a look at the concepts of the Rosetta System Level Design language (http://www.sldl.org/).
  40. Domain Object Modeling

    G, I think you are missing the point here by a very, very wide margin.

    There are many situations where:

    • Applications must share data
    • There is often a lot of such data
    • The applications are controlled by different groups, with different approaches, policies, and political realities
    • Often these varying applications are done in radically different languages, or with radically different designs
    • And they still have to share data
    Your response keeps referring back to design and architecture level terminology. "Enterprise Object modelling", "application object modelling", "Domain object modelling". A model is just that - merely a model, an abstraction and approximation of the real world.

    In this discussion I am not talking about models or design methodologies - I am talking about the simple question "How do we share data?". Not in modelling terms, but in implementation e.g. physically, how do we share data?

    The additional fly in the ointment: how do we share data when the data itself is truly "shared" in nature, with no obvious controlling authority for that data, and where _no one_ has universal control over the various applications and codebases?

    This is a fundamental point that you are completely avoiding - how do you get your job done and meet the aims of your group while also satisfying the larger aims and requirements of the larger business? There are multiple requirements, multiple business and technical units, and sometimes conflicting goals. There is a constant balancing (and fighting!) between local needs and increasingly more global needs. In such an environment - which I posit is a common one - there are complex relationships between groups and a dizzying array of technologies in use. What you keep falling back on is the concept that central control is possible (which it usually isn't, at least not sufficiently), or the idea of all groups cooperating seamlessly with good will. In reality, neither of these will hold. Groups will come in conflict with each other, technology changes, business goals will contradict each other at different levels (e.g. the needs of the front office trading people will rarely dovetail nicely with those of the back office people, and there is no permanent "authority" that can dictate a universal architecture). Indeed, how does one define interacting domain models when some of the systems involved are half a million lines of legacy code in a non-OO language?

    Despite all of the negatives I've listed above - somehow, people still create systems that work. The successful systems are ones where people have found a way to share data and still get their local jobs done. This may involve hijacking someone else's responsibility, setting up local databases, perverting one or more feeds, negotiating changes in existing services/systems/APIs, or perhaps hooking in relatively painlessly into these constructs without any radical change.

    One unifying theme of this sort of environment is that business is and has been typically on-going for some time. Developers rarely create a truly "new" system in the sense of adding completely new business functionality. More often than not, they are re-writing some old legacy system for updated technology, or to add some new features to reflect changing needs. This means that developers in these environments more often than not are doing "new" development - but said development is actually re-doing something that has been done before. As a result, there are guaranteed to be "N" legacy systems that they have to interact with - possibly including the system they are replacing.

    Some of what you are talking about applies most commonly to relatively new businesses which have a fairly uniform IT structure, and not too much legacy cruft to get in the way. An Amazon, or an eBay, may have a domain model, IT policies, languages, etc. that are relatively uniform. But I'm not talking about that. Think of, for example, any number of brokerage houses or banks on Wall Street. These are companies that have been in business for many decades, and have been deploying computerized systems of their own since at least the 70's. They are generating billions of dollars of revenue every year right now with the systems they have. Of course they update and replace such systems - but this activity _must_ happen in the context of the on-going billions of dollars worth of transactions going through their systems every day. In such an environment abstract ivory tower modelling takes a back seat to making $$$$. To put it another way, the business is more than willing to tolerate a large degree of "dirtiness" in its designs if it keeps a few billion dollars a day flowing through its networks.

        -Mike
  41. Domain Object Modeling

    Hi Mike

    When a Java application needs access to a datasource the designer must choose the most appropriate means for this access. It might be that the data can be accessed through a direct connection to the datastore. It might be that access is provided only through a service which encapsulates the datastore connection. It might be that there are various technologies in the mix.

    Depending on the particular scenario you may choose to build a Java-based domain model or not. If your access is through another application's service or a technology service, a domain model might be harder to envisage.

    And just because the Java application chooses to model the domain, doesn't mean that suddenly all other technologies accessing the shared data must use an equivalent model.

    I presume you asked this question for the benefit of others, since in your position you should be well versed in the various options and tradeoffs. By the way, is your position with Core Developers' Network or with JBoss Group at the moment? With all the comings and goings I can't remember.

    Kind regards, Robin.
  42. Domain Object Modeling

    When a Java application needs access to a datasource the designer must choose the most appropriate means for this access. It might be that the data can be accessed through a direct connection to the datastore. It might be that access is provided only through a service which encapsulates the datastore connection. It might be that there are various technologies in the mix.
    Um, OK. I can't really refute the above, but at the same time the above could be interpreted in many ways, depending on the details :-)
    Depending on the particular scenario you may choose to build a Java-based domain model or not. If your access is through another application's or technology service, a domain model might be harder to envisage.

    And just because the Java application chooses to model the domain, doesn't mean that suddenly all other technologies accessing the shared data must use an equivalent model.
    The original argument was whether "data" or code was more important, with myself observing that data has a wider applicability than code. In that context, the question becomes whether the Java solution has exclusive access to a set of data or not. If it does, it really doesn't matter in the grand scheme of things what you use - the designer, as you say, just uses their judgement for that one application.

    But the interesting bit is when a Java solution does _not_ have exclusive access to a set of data. This is a reality, a constraint if you will, that is extremely important, and yet many people seem to conveniently ignore it when they talk about things like domain models, OODBMS systems, etc. Quite often a Java system does have to interoperate with other systems, and a lot of the talk here quite simply dissolves as irrelevant when this occurs. Many developers I've worked with actually laugh out loud when they read these discussions. Their responses can be paraphrased as "All that is very nice, well, and good, and I'm very happy for those lucky souls; as for me, I _have_ to work with systems X, Y, and Z and it's preordained how to do that".

    An interesting side case is when a development team fools themselves into thinking they have exclusive access to data when they do not, with disastrous results. I've seen some groups either ignore reality, or not be aware of it, and go ahead and build a system based on an OODBMS, or some linked-in thing like Prevayler, or some similar technology, confident that _they_ have the power to choose what's right for them and to do what makes their own life easier. And then, as the system grows, they find out that they were wrong and they _do_ have to integrate with other systems - and find that there is no easy way to integrate their choice of datastore/model with what the rest of the firm is doing. These are the systems that look wildly successful in their initial trials and tests, and then start falling apart as reality descends and integration shows the weakness of their self-centric approach.

    This is why data models are important, and why RDBMS' have lasted for so long. It is proven that highly disparate systems can use an RDBMS as a common data store, and that such systems scale, and that the data captured within them often outlasts the first applications that were built with them.

    You can say that Java developers have "choice" when it comes to these matters - but it's a rather illusory one in the long run. People who "choose" not to use an RDBMS as the main backing store, with few exceptions, _inevitably_ end up regretting the decision if the system is successful (if it's not successful then the point is moot :-). A system that is successful and grows will, by definition, be forced to integrate with other systems or die.
     By the way, is your position with Core Developers' Network or with JBoss Group at the moment? With all the comings and goings I can't remember.
    My "position"? I'm not sure what you mean by that. If you mean a "job", then it's neither. If you mean "philosophical preference", then you could say I prefer the CDN approach (while pointing out that it is not necessarily the best approach). If what you really mean is "do you still rant against JBoss, it's practices, and business model", then the answer is a solid "yes".

        -Mike
  43. Domain Object Modeling

    Hi Mike
    In that context, the question becomes whether the Java solution has exclusive access to a set of data or not.
    You go on to examine scenarios where the choice of datastore was made on the basis of information that proved to be incomplete.

    JDO lets you use the datastore of your choice. EJB 3.0 will force you to use a datastore for which an efficient JDBC driver is available. I see no reason for the community to accept a next generation of enterprise Java technologies that constrains deployment topologies in this way.

    So you used Prevayler (through its hypothetical JDO interface). So you used Versant. So you used Firebird. So you used Sybase. So what? If the application was designed with a domain model (anaemic or behaviourally complete) persisted through JDO then you can quite easily change to Oracle when you get told to do so. If you've had to exploit native capabilities of your datastore, then hopefully you've done so in a well-encapsulated manner which facilitates limitation of impact during the porting exercise.

    Kind regards, Robin.
  44. Switching datastores

    Sorry Robin, I have to disagree on this point. Switching from, say, Prevayler to, say, Oracle would not be all that easy for a real enterprise application. At best a well-designed encapsulation model might make such an effort _only_ several months, instead of years (or just plain impossible). I know one of the core ideas in JDO is being able to switch datastores, but it just isn't practical IMHO for many complex applications. For some, yes, but for apps I would call "interesting" it isn't.

        -Mike
  45. Domain Object Modeling

    And just because the Java application chooses to model the domain, doesn't mean that suddenly all other technologies accessing the shared data must use an equivalent model.
    What worries me is what happens in that scenario with data *consistency*. Suppose we are a start-up and we have the luxury of drawing up a grand OO design, so our Java app will be based on a domain model. Some years down the line new apps come up, and some of these will want to access the *data*, and not necessarily through the domain model. Doesn't this open many possibilities to break the original Java app, since it relied on some data consistency which is enforced by the behavior of the domain model?

    I thought one of the strong points of an RDBMS lay exactly in this: in enforcing a consistent state for data, irrespective of the *use* that any application made of that data.
  46. Domain Object Modeling

    Hi Jose

    Where to put the validation constraints? This discussion is certain to be as enduring as the data itself. You can put all of your constraints (from FK constraints to property validation rules to collaboration rules) into the database if your database is capable.

    <terminology>
    FK constraint: e.g. the product identifier in the order line item entity must reference a valid product in the product entity.
    Property validation: e.g. the quantity of the product cannot be zero or negative.
    Collaboration rule: e.g. dynamite may not be in the same order as matches.
    </terminology>

    Your domain model may be very state heavy, allowing validation to occur on flush. JDO will faithfully persist this object model. Validation failures will only be available at flush time, usually on commit.

    You could choose to put corresponding validation into the domain model. This probably means validation logic will be duplicated in two different tiers (domain and datastore), and inevitably the languages with which enforcement is achieved are different, so this is real duplication, not reuse. However, the application can fail fast on the setting of an invalid property or the incorrect establishment/dissolution of a collaboration.
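    As a sketch, a fail-fast property check in the domain model might look like this (class names invented, mirroring the terminology examples above):
    public class OrderLineItem {
     private Product product; // must reference a valid product (the FK constraint)
     private int quantity;

     public void setQuantity (int quantity) {
      if (quantity <= 0)
       throw new IllegalArgumentException ("quantity must be positive");
      this.quantity = quantity;
     }
    }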

    You can mix and match. Concrete validation goes into the database, e.g. FK and basic property checks such as non-negative, explicit cardinalities, etc. But then put collaboration logic into the domain model only.

    The question is one of where you want failures to be detected, and how much you are prepared to duplicate if you have cause to establish and update equivalent datasets through a technology which precludes reuse of the Java domain model.

    The important thing is that the designer has the choice. JDO will faithfully persist the Java object model, reporting any datastore constraint failures and rolling back as expected.

    Finally, use of JDO to map an object model to any given datastore does not preclude simultaneous and transactional interactions with that datastore by other technologies. JDO generally delegates concurrency control to the datastore. So your COBOL, C, C++ etc. systems can happily use the datastore at the same time.

    You have the choice, and are not constrained in that choice by JDO.
  47. Domain Object Modeling

    Where to put the validation constraints? This discussion is certain to be as enduring as the data itself.
    Yep, and your remaining comments are quite true. It can be very difficult to know the right place to put behavior, particularly validation behavior. In many environments it's much more complex than just a Java app talking to some (switchable) datastore, and deciding who-does-what is a career in and of itself.

        -Mike
  48. Domain Object Modeling

    Mike, the people who keep those collections of legacy applications that you speak of going are, in their own way, quite heroic. I understand and accept that there are no prospects for overall change and improvement in the environments you describe and the only prospects for something that works well are at the very local level and even then it is quite a struggle against the rest of the environment.

    Looking at the larger business picture, a company that can commit to object orientation and new applications across the board (such as a new company) is at a real competitive advantage. Nicholas G. Carr is quite mistaken in saying that technology does not provide competitive differentiation. If he understood the environments that you describe as well as you do, he would be quite sure that there is major competitive differentiation resulting from how companies use technology.

    The environments that you describe greatly overvalue their legacy applications. Accounting uses the cost of building an application as its value, and so is of the view that the company has great value in its legacy apps. Also, a large number of lines of code is no testament to the value of an application. The environments that you describe assume that the (estimated) value of their applications will save them from new competition.

    For the benefit of casual readers, we have digressed from anything to do with JDO 2.0/EJB 3.0 here. This discussion does not weigh in on JDO 2.0 versus EJB 3.0.
  49. Domain Object Modeling

    Looking at the larger business picture, a company that can commit to object orientation and new applications across the board (such as a new company) is at a real competitive advantage. Nicholas G. Carr is quite mistaken in saying that technology does not provide competitive differentiation. If he understood the environments that you describe as well as you do, he would be quite sure that there is major competitive differentiation resulting from how companies use technology.
    Oh boy. G, the places where I work manage transactions in the hundreds of millions of dollars into the trillions - a single fixed income trade might be $100MM US. Their revenues are in the billions, and sometimes their profits are, too.

    These companies have systems and environments precisely as I have described. And these companies, brokerage houses and banks, are among the bleeding-edge leaders in technology adoption and their astounding success can be directly tied to that aggressive use of technology.

    But as a rule they do not approach technology dogmatically as you do, but rather with a very practical, pragmatic viewpoint: the technology is there to make them money, period. And they are continually learning how to use technology in very high volume, very time sensitive, and very failure sensitive environments. And, above all, they also understand risk and its applications. These are companies that have lost millions of dollars due to bugs - directly. These are companies that have thrown $50 million into a new development effort and gotten $0 out of it. And these are companies that have also embraced technological change and the advantages it conveys and brought in trillions of dollars in revenue because of it.

    These companies succeed in an environment, and with systems, that you are directly denigrating. By your measure, they should have failed long ago - but they have not. The reason why is simple: such companies have learned, for the most part, how to run software development projects. And among the top 5 lessons, you will _not_ find "Object Orientation", domain modelling, or anything else like it. Instead, the lessons are more along the lines of:

    • Hire people who have proven they are good at software development, high volume, time sensitive, and failure sensitive systems. Give them boatloads of money in return for their efforts (a really good developer in finance can bring home a high five figure or even six figure _bonus_).
    • Test the hell out of everything
    • Test the hell out of everything with real data in as close to an environment as production as humanly possible
    • Failure and recovery, throughput, and latency are all _equally_ as important as normal functional concerns.
    • Choose to buy when "buying" has extremely clear benefits. If the benefits aren't clear, don't be afraid to build your own.
    • Always have a fallback plan when deploying something new
    • Learn every new technology under the sun and understand it inside-out, backwards, forwards and sideways. This does not mean you adopt everything new - new for newness sake has lost more dollars than just about any other mistake in IT worldwide. It means understanding your options.
    • The code you're running and maintaining and upgrading today will eventually die and be replaced. Get over it.
    • Most software developers lie :-) Never trust a developer until you've tested their code under real conditions.
    The above is the formula that Wall Street has been applying for decades to technology - and these lessons learned in the 70's and 80's still apply today. Yes, I know what some people are thinking - I work in XYZ group for ABC bank and everything we do sucks donkey eggs. To those people I say "OK - but where are you succeeding? What does your system(s) do well?". And "are you actually working on anything which makes money?". Large banks and brokerages have big bureaucracies just like any other companies, and there are a lot of dead-end apps and dead-end jobs where little gets done, little works, and few people seem to care. But, in my experience, all this changes when you get close to the money. Get close to a real trading system and the information that flows through the firm and _usually_ you will see an abrupt change, and alignment along the lines of the points I outlined above.

    Your posts here basically paint the environments I've talked about in a very negative light, and you mention that companies that follow your advice should be more competitive. But at least in financial services, they're not. The reason why goes back to what I said: the point of the business is the business, and software must first and foremost support that. As one example, at one place I worked there were tons of COBOL code which fed the Federal Reserve Wire. As far as I know, it's still there. Why? It's simple: that code is very complex, and has been grown over literally decades, and _it works_. And in case you didn't know, the Fed Wire is crucial to the business - if trade information doesn't go out on the wire, _it doesn't exist_ and you lose millions. These companies understand risk management, and they understand that the risk of dumping this software and re-writing it from scratch is just too high for them to tolerate. Some day that risk assessment will change, and that code will get dumped, but it hasn't happened yet. These systems are pumping trillions of bucks out, and if a new system dropped just one tenth of one percent of that volume through some bug the companies would be devastated.

        -Mike
  50. Domain Object Modeling[ Go to top ]

    are among the bleeding-edge leaders in technology adoption
    Having the technology somewhere doesn't mean you are using it well.
     and their astounding success can be directly tied to that aggressive use of technology.
    That sounds like it came out of the marketing department, as does most of the rest of your post.
    But as a rule they do not approach technology dogmatically as you do
    I see you are getting personal. The dogma is yours. Improvement is my thing. Has a nerve been touched?
    Your posts here basically paint the environments I've talked about in a very negative light
    Mike, you have done that all by yourself.
  51. expensive mistakes[ Go to top ]

    Making expensive mistakes does not mean that insightful thinking or anything more than the most superficial learning will occur.

    After all you have said about these environments it is absurd to turn around and say that they know how to run software development projects.
  52. think[ Go to top ]

    That large sums of money are involved in a project does not imply that the money is well spent.

    Also, the need for pragmatism does not imply that lateral thinking is invalid.
  53. economics[ Go to top ]

    Looking at the larger business picture, a company that can commit to object orientation and new applications across the board (such as a new company) is at a real competitive advantage. Nicholas G. Carr is quite mistaken in saying that technology does not provide competitive differentiation. If he understood the environments that you describe as well as you do, he would be quite sure that there is major competitive differentiation resulting from how companies use technology.
    By your measure, they should have failed long ago - but they have not.
    That anti-competitive factors may arise is not a well-kept secret.
    But, in my experience, all this changes when you get close to the money.
    The free market only works when the sums of money become large?
    That's a new one! Spille's law of economics? :-)
  54. economics[ Go to top ]

    Yo, G Money, how many times you gonna reply to the same post??? I count 3 so far....
    That anti-competitive factors may arise is not a well-kept secret.
    You sure it's "anti-competitive factors"? You savvy Wall Street and the various companies involved therein? Is it an anti-competitive environment that's causing their success, or are they just very good at what they do?

    Seems to me it has more to do with being good than with "anti-competitive factors". When a brokerage or bank goes bad, even in IT, it tends to go down or get bought. Ever hear of Kidder Peabody? :-)
    The free market only works when the sums of money become large?
    That's a new one! Spille's law of economics? :-)
    I'm trying to figure out what this has to do with my comments. My comments were about how things work inside of large banks and brokerage houses. Such environments are most emphatically _not_ a "free market".

    In these environments, the closer you are to the money the more critical it is that the code works, is fast, and can tolerate and recover from errors. And typically the better compensated you'll be. If you're on a team delivering a front-office fixed income trading system, you're coding a system that handles trillions of bucks per year. There's enormous pressure to make sure you get it right.

    On the other hand, you may be working on a system three times removed from the money. Banks and brokerages have all sorts of soft and fuzzy systems which don't directly hook into the money flow, and which have correspondingly much less of the spotlight. It's perceived in the company that these systems have value, but they are not focused upon the same way pricing systems, trade flow systems, etc. are.

    Just to be clear, I'm trying to keep my comments grounded in an industry I know, and to be specific whenever possible. So what I'm describing is what I've seen in real life in various financial services companies. In these environments, systems _do_ need to interoperate, and data (or at least data models) often outlast code. But at the same time, interoperation is often curtailed or blocked by "other considerations" :-)

    So that's where I'm coming from. I admit that I've had some trouble debating this with you, G, because you've remained vague and generic in your posts. I have no idea if you're talking about a simple 20,000 line web app in a dot com startup, a boondoggle reporting system in a research and development company, an accounting system in an insurance company, or if you're an academic telling me what the books say :-)

    So if you want to take this further, give me an industry, give a sense of how it works in your view, and for gosh sakes stop speaking in the abstract and give some concrete examples (and thanks to Henrique for doing exactly that!).

         -Mike
  55. across industries[ Go to top ]

    So if you want to take this further, give me an industry, give a sense of how it works in your view, and for gosh sakes stop speaking in the abstract and give some concrete examples
    I am interested here in what is true across industries. The view that each industry must entirely remain in its own IT world and no approach can be generally applicable is way too parochial. I am sure that industry-independent techniques seem too abstract to you. You will try to give the impression that such techniques are anti-business.

    As for taking it further, I definitely would but unfortunately priorities will take my time elsewhere for some weeks. I will miss the opportunity to identify all of your little manipulative patterns! :-)
  56. across industries[ Go to top ]

    Ah, someone asks for specifics and you suddenly get antsy. You'll probably get all pissed off at me for saying this, but I generally interpret this to mean that you only have generalities and don't have any specifics.

        -Mike
  57. abstraction[ Go to top ]

    Ah, someone asks for specifics and you suddenly get antsy. You'll probably get all pissed off at me for saying this, but I generally interpret this to mean that you only have generalities and don't have any specifics.
    I see that you are not comfortable with abstraction. Looking back, that would explain all you have been trying to do. I guess you are just not good at it. Abstraction in IT isn't going away, so try to make your peace with it.

    There is value in having got to the bottom of this, but I really must attend now to my priorities.
  58. abstraction[ Go to top ]

    Yep, you're absolutely right G. You started posting to TSS a whole 3 days ago, and in that brief time you've found me out - I don't know crap about abstraction. We should all emulate you and be sure never, ever to get our hands dirty with jobs, or industries, or implementations - they get in the way of all of the lovely abstractions!

    You clearly have a great job in a fascinating company that allows you to always be generic and never specific. You must be the proud owner of many thousand page documents.

    Programming and writing code in IT isn't going to go away - for your sake I hope no one finds you out!!

    Let me know when you get over your fear of revealing things like your first name, the industry you work in, or any other specifics.

        -Mike
  59. abstraction[ Go to top ]

    I don't want to let both of you down, but you are both wrong: in IT, there must be abstraction (so you get a clear view of what you need), and there must be specifics (so you get a clear view of how to do it). IMHO, despite what extreme programming and other agile development paradigms say, neither can nor should exist without the other.

    Regards,
    Henrique Steckelberg
  60. abstraction[ Go to top ]

    Sure. I wasn't saying abstraction was everything, I was saying it is indispensable.
  61. abstraction[ Go to top ]

    I don't want to let both of you down, but you are both wrong: in IT, there must be abstraction (so you get a clear view of what you need), and there must be specifics (so you get a clear view of how to do it). IMHO, despite what extreme programming and other agile development paradigms say, neither can nor should exist without the other.
    I agree 100%. My point, which I badly worded, was that you cannot discuss abstractions without specific concrete examples. It's one of the few areas where I agree with Stroustrup - you cannot define an abstraction until you have experience with several concrete examples first. What I found here is that the abstractions being put forth were meaningless without concrete reference points and examples - and thanks for the telecom examples on your side!

        -Mike
  62. blog[ Go to top ]

    Mike, I just noticed you have a blog and took a couple of minutes out to zip through it. The personal background was unexpected. I had previously imagined that you were entrenched in Wall Street as a Master Of The Universe arrogantly demanding that all should give up thinking for themselves, and it thus seemed that you were fair game for full-on tackling. In the light of your blog, I see that's wrong. Sorry. The cats thing helped too - I adore cats :-)
    Apologies again.
  63. integration[ Go to top ]

     such companies have learned, for the most part, how to run software development projects. And among the top 5 lessons, you will _not_ find "Object Orientation", domain modelling, or anything else like it.
    Nor application integration, by your account.
    Instead, the lessons are more along the lines of:
    • Hire people who have proven they are good at building high volume, time sensitive, and failure sensitive systems. Give them boatloads of money in return for their efforts (a really good developer in finance can bring home a high five figure or even six figure _bonus_).
    • Test the hell out of everything
    • Test the hell out of everything with real data in as close to a production environment as humanly possible
    • Failure and recovery, throughput, and latency are all _just as important_ as normal functional concerns.
    • Choose to buy when "buying" has extremely clear benefits. If the benefits aren't clear, don't be afraid to build your own.
    • Always have a fallback plan when deploying something new
    • Learn every new technology under the sun and understand it inside-out, backwards, forwards and sideways. This does not mean you adopt everything new - newness for its own sake has lost more dollars than just about any other mistake in IT worldwide. It means understanding your options.
    • The code you're running and maintaining and upgrading today will eventually die and be replaced. Get over it.
    • Most software developers lie :-) Never trust a developer until you've tested their code under real conditions.
    The above is the formula that Wall Street has been applying for decades to technology - and these lessons learned in the 70's and 80's still apply today.
    What you list is quite basic. I suggest that there is more to learn.
    the point of the business is the business, and software must first and foremost support that.
    Of course. I wonder what I said that led you to the conclusion that I was of a different view?
    As one example, at one place I worked there were tons of COBOL code which fed the Federal Reserve Wire. As far as I know, it's still there. Why? It's simple: that code is very complex, and has been grown over literally decades, and _it works_.
    Things growing very complex have a tendency to be at odds with their continued working. So far, your suggested solution is a secret X factor, possibly some kind of right stuff.

    These systems are pumping trillions of bucks out, and if a new system dropped just one tenth of one percent of that volume through some bug the companies would be devastated.
    That there is motivation for and effort in pursuit of program correctness is clear. This is a distraction from the actual debate, however.

    Your writing is quite slick at giving the impression that the person with whom you are arguing holds certain beliefs which in reality they do not hold at all.
  64. integration[ Go to top ]

    Your writing is quite slick at giving the impression that the person with whom you are arguing holds certain beliefs which in reality they do not hold at all.
    That is not my intent. Be more specific, as I mentioned in my previous post, and perhaps this can be avoided.

        -Mike

    P.S. That's 4 replies to one post!
  65. slick[ Go to top ]

    Your writing is quite slick at giving the impression that the person with whom you are arguing holds certain beliefs which in reality they do not hold at all.
    That is not my intent.
    Nonsense :-)
    That's 4 replies to one post!
    Aha, but look at the total amount of text written!
    You have spilled out much more text than I :-)
  66. Domain Object Modeling[ Go to top ]

    Mike, a possibility that doesn't depend on RDBMS occurs to me that may in future ease your pain in the environments you describe. Work with me here.

    Okay, for reasons of politics/lack of team playing/lack of insight/legacy/etc, let's say that your situation concerning devising object semantics that have more than local meaning is indeed hopeless.

    Notice what is coming with Service Data Objects. SDOs are data-graph oriented. Sure, they are talking about shipping SDOs around in XML, but that doesn't mean that SDOs are hierarchical. In fact, SDOs have the full flexibility of object graphs.

    SDOs themselves are not persistent, but as the SDO concept becomes established what we will see is what might be termed Data Graph DBMSs. I will use the term GDBMS for now. The broader SDO term covering what I am calling a GDBMS is "data mediator". The GDBMS delivers SDOs on request. Early GDBMSs might be built on RDBMSs. Even in a native GDBMS implementation, SQL access will be supported for those applications that have already been hardcoded to use the relational model. So what you would do for new apps is have them use SDO. The optimistic transaction model used with SDOs is suitable for the environments you describe.

    The data graph model is then the corporate standard, with legacy apps that are hardcoded to use the relational data model still supported. There is less work for vendors to do in mapping object graphs to data graphs than there is in mapping object graphs to relations, and the object-to-relational mismatch is avoided. DBMS designers can pursue performance advantages that arise from integrating graph data model and relational data model support in the DBMS. Note the claims in a similar regard (object, relational and multidimensional integration in the DBMS) from InterSystems for its Cache product and its performance (http://www.intersystems.com/cache/).
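    To sketch the shape of this in code - all of the type names below are invented placeholders, not a real API; the only thing taken from SDO itself is the general idea of a disconnected data graph with a change log and optimistic write-back:

    // Hypothetical sketch of the data-mediator / GDBMS idea above.
    // DataMediator, DataGraph and DataNode are invented names, not a real API.
    import java.util.List;

    interface DataNode {
        Object get(String property);              // read a property of the node
        void set(String property, Object value);  // recorded in the graph's change log
        List<DataNode> getList(String property);  // traverse a multi-valued association
    }

    interface DataGraph {
        DataNode getRoot();
        // The change log is what enables optimistic transactions: the mediator
        // replays it against current state and fails if someone else changed it.
    }

    interface DataMediator {
        DataGraph fetch(String query);                         // disconnected data graph
        void apply(DataGraph graph) throws ConflictException;  // optimistic write-back
    }

    class ConflictException extends Exception {}

    class Client {
        void raiseCreditLimit(DataMediator mediator) throws ConflictException {
            DataGraph graph = mediator.fetch("customer where id = 42");
            graph.getRoot().set("creditLimit", 50000);
            mediator.apply(graph);  // no lock was held while we worked
        }
    }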
  67. GDBMS[ Go to top ]

    Notice what is coming with Service Data Objects. SDOs are data-graph oriented. [...stuff on SDOs and GDBMS, quoted in full above...]
    An amendment: a GDBMS will support both optimistic transaction and pessimistic transactions for data graphs. If using SDO you use optimistic transactions of course. If using transparent persistence you have the choice of optimistic or pessimistic transactions.

    When time permits I will have a look around at what developments are supportive of the GDBMS idea, and maybe prod some people into thinking more about it.

    Mike, thanks for compelling me to consider your situation (I mean that).
  68. Domain Object Modeling[ Go to top ]

    In your thinking, an object model is associated with one application. That you have witnessed or participated in this practice I do not doubt at all. This practice is seriously suboptimal, as you notice. Instead of this practice, the object modeling should be done for whole domains, i.e. domain object modeling. All applications using the domain use its domain object model. You would discourage application-specific data models, yes? Equally, application-specific object models are to be discouraged.
    As one example, financial trading systems, 5-15 applications can easily be operating with the same trading data, but each is working from radically different requirements - the front office trading desk systems, middle-office checker systems, ticket printers, back office trading feeds, trade matchers, pricing analytics engines, historical audit systems.
    By the way, the answer to this object design problem is straightforward and does not require new object design techniques. I copied the following text from a post of Robin's on the JDO 2.0 and EJB 3.0 thread because it also gives you the solution you need:
    Symptomatically, if you have different applications working on the same model and domain-relevant behaviour is being duplicated then it should be in the domain objects themselves. Equally, if you have behaviour in the domain objects used by one application, and this behaviour is counter to the requirements of a new application using the same domain, then you've inadvertently put application-relevant behaviour into the domain object model and this must be corrected.
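    A minimal sketch of that rule (class and method names invented for illustration): domain-relevant behaviour lives once, in the domain object; application-relevant behaviour stays in the application that needs it.

    // Invented example: notional() is domain-relevant - every application
    // using the trading domain agrees on it, so it lives in the domain object.
    public class Trade {
        private final double quantity;
        private final double price;

        public Trade(double quantity, double price) {
            this.quantity = quantity;
            this.price = price;
        }

        public double notional() {
            return quantity * price;
        }
    }

    // Ticket formatting is application-relevant - only the ticket printer
    // cares about it - so it stays out of the domain object model.
    class TicketPrinter {
        String format(Trade t) {
            return "TICKET notional=" + t.notional();
        }
    }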
  69. Domain Object Modeling[ Go to top ]

    I see this thread is interesting too :)
    So, is there something wrong with keeping the domain model in the database? Do we need EJB 3 and JDO 3 just so that everything is Java, the OOP dream? A lot of client applications implemented in different programming languages use the same "enterprise" database, and OOP + Java needs to kill perl on crontab, reports, MS Office, ... before selling this dream as the standard for the enterprise.
    By the way, the answer to this object design problem is straightforward and does not require new object design techniques. [...Robin's text, quoted in full above...]
  70. Domain Object Modeling[ Go to top ]

    Juozas

    The persistent Java domain object model is represented as data in the database. This data is available to any technology which may access that database.

    Whilst working from Java technology it makes sense for the domain objects to hold domain-relevant logic. It makes sense for Java applications to manipulate these objects through a level of abstraction which isolates most of the application from any brittleness arising from intimacy with the database schema.

    While working from another technology use whatever you believe to be the most appropriate mechanism provided for accessing the data, whether in its table/column form or as an object model expressed in that technology's programming language.

    With JDO the underlying data access, locking etc. is serviced by the database. So transaction isolation transcends the various technologies which may simultaneously be accessing the underlying data.
  71. Domain Object Modeling[ Go to top ]

    Symptomatically, if you have different applications working on the same model and domain-relevant behaviour is being duplicated then it should be in the domain objects themselves.
    I see. So, to pick an example out of thin air, Goldman Sachs should re-write 2 million lines of bastardized COBOL, C, C++, and perl code because some ivory tower domain modeller says "behaviour is being duplicated". And a single individual or tight group of individuals has the power within the organization to mandate this, make it stick, and build a real system out of it.

    LOL!
    Equally, if you have behaviour in the domain objects used by one application, and this behaviour is counter to the requirements of a new application using the same domain, then you've inadvertently put application-relevant behaviour into the domain object model and this must be corrected.
    OK, so there's an existing trading system written in a mix of C and C++. A new Java/J2EE thingy comes along with, say, 25% overlap with the C/C++ trading system.

    So tell me - how do we partition behavior and data between these two systems written in radically different languages? Assume, as is common, that the C/C++ guys are in a different reporting chain than the new J2EE thingy. The J2EE domain modeller says "there's too much behavior in the domain model" (whatever that means). So tell me - how do the J2EE guys convince the C/C++ guys to change their data models and code?

        -Mike
  72. missing the point[ Go to top ]

    I beg to differ: I work at a major telecom company down here in Brazil, and the fact that there is no clear data model for the company's business has created lots of problems. There is a drive in the company now to integrate systems and databases, in order to make the company more flexible and faster regarding deployment of new products, provisioning and maintenance. We used to have specific systems for different technologies, and all the scenario Mike described: different languages, databases, platforms. But now that everyone is seeking to integrate all these systems and databases, we are running into a big wall: each system has a different view of the information, so simple things, like what name to give to a site, have been a major pain to resolve. Different systems identify sites differently, so we actually ended up having different inventories of sites, one for each system, and uniting that into one coherent and complete database will take months of hand work. Ugly.

    Now, if we had a defined data model for the whole company, even a very high level one, this would certainly be minimized. The main entities should be defined, with their key information and relationships, and that should be made a guide for every system developed in the company: that would have made integration so much easier and faster. Having an architecture group inside the company's IT and a well defined data model would have saved us lots of money and time indeed, I can't stress that enough.

    Regards,
    Henrique Steckelberg
  73. missing the point[ Go to top ]

    I understand what you're saying Henrique, but I don't agree with your conclusion in the paragraph:
    Now, if we had a defined data model for the whole company, even a very high level one, this would certainly be minimized. The main entities should be defined, with their key information and relationships, and that should be made a guide for every system developed in the company: that would have made integration so much easier and faster. Having an architecture group inside the company's IT and a well defined data model would have saved us lots of money and time indeed, I can't stress that enough.
    I have seen what you are describing attempted, and at least where I have worked it has universally failed. Some of the problems:

    • In the time it takes to do this level of definition and modelling across many disparate code bases and applications, the business and technology have moved significantly. If this sort of effort ever delivers, it often delivers even later and less capable than the dismal track record IT has in general.
    • People defining cross-system models always lack domain-specific knowledge in several key applications. Typically they are experts in one or two domains, and the model favors those domains at the expense of others. Many times the universal model will treat one or two applications or domains so badly that the groups in charge of those areas will mutiny against the corporate standard.
    • People waste a great deal of time bemoaning the past and where it has led them today. They bitch and curse and dream of better worlds, say over the water cooler "If only they had...". Get over it - study the past but don't become obsessed with it. Today and tomorrow are where the action's at.
    • The clincher: what's good for programmers is not necessarily good for the business as a whole. The measure of IT success is its ability to serve business needs, not the cleanliness of the code or models. There are long-term benefits to applying good software techniques on your projects - but those are secondary to solving people's problems. I have seen super-clean systems be dismal failures because the cleanliness ignored reality, or just plain took too long to develop. And I've seen dirty, ugly hacks save hundreds of thousands of dollars in fines, or generate millions in revenue. No, my lesson isn't to abandon good coding practices. The lesson is that code is a means to an end - not the end itself.
    Many people in IT are focused most acutely on making their jobs easier. Sadly for them, this is sometimes the worst possible thing for the business as a whole. I've seen (and occasionally even _been_ one of the...) programmers curse their jobs, how difficult they are, and what a mess some other developers caused by decisions made X years ago. But many programmers miss the fact that quite often, those "bad" decisions may have saved the business at the time, or generated big revenues that pay your salary today, or validated a technology or approach that was unheard of at the time. In a quest for software perfection too often people forget how often pragmatism pays the bills and gets the job done.

        -Mike
  74. Domain Object Modeling[ Go to top ]

    People defining cross-system models always lack domain-specific knowledge in several key applications. Typically they are experts in one or two domains, and the model favors those domains at the expense of others. Many times the universal model will treat one or two applications or domains so badly that the groups in charge of those areas will mutiny against the corporate standard.
    The problem there is the failure to separate the roles of domain expert and domain modeler. That failure is commonplace, and results in seriously suboptimal domain models, just as you notice.
  75. missing the point[ Go to top ]

    I understand what you're saying Henrique, but I don't agree with your conclusion in the paragraph:
    Now, if we had a defined data model for the whole company, even a very high level one, this would certainly be minimized. The main entities should be defined, with their key information and relationships, and that should be made a guide for every system developed in the company: that would have made integration so much easier and faster. Having an architecture group inside the company's IT and a well defined data model would have saved us lots of money and time indeed, I can't stress that enough.
    For me it is quite obvious that any, I mean, ANY reference model for the key business entities would have helped us hugely. IMHO, it is plain obvious.
    I have seen what you are describing attempted, and at least where I have worked it has universally failed.
    Maybe because your environment is different from mine. But I am not sure about that.
    Some of the problems:
    • In the time it takes to do this level of definition and modelling across many disparate code bases and applications, the business and technology have moved significantly. If this sort of effort ever delivers, it often delivers even later and less capable than the dismal track record IT has in general.
    And what does the business model have to do with the technology? Nothing: you define a model, it can be applied anywhere, and that is one of the reasons the data model outlives the technology and the systems. What is done is done; this kind of model should be used in new projects and in specific cases when maintaining existing systems, but not to mandate a complete rewrite of existing systems to make them compliant with the model: that is not feasible in most cases. This way the company will have a clear definition of where, how and what the systems must be in the future, as they evolve.
    • People defining cross-system models always lack domain-specific knowledge in several key applications. Typically they are experts in one or two domains, and the model favors those domains at the expense of others. Many times the universal model will treat one or two applications or domains so badly that the groups in charge of those areas will mutiny against the corporate standard.
    Blame the architecture group, not the model. If you have the right group, with the right information, you should have a fairly good model in the end. They should not focus on specific systems or areas, but instead on users' general needs and mainly: the company's needs and business. Then you will get a good model. The moment you start looking into the existing systems, your model is gone.
    • People waste a great deal of time bemoaning the past and where it has led them today. They bitch and curse and dream of better worlds, say over the water cooler "If only they had...". Get over it - study the past but don't become obsessed with it. Today and tomorrow are where the action's at.
    Creating a data model is not about looking into the past, but instead looking into the present and mostly the future. The past is broken, is chaos. The future should be integrated and coherent, and that's the whole point of it.
    • The clincher: what's good for programmers is not necessarily good for the business as a whole. The measure of IT success is its ability to serve business needs, not the cleanliness of the code or models. There are long-term benefits to applying good software techniques on your projects - but those are secondary to solving people's problems. I have seen super-clean systems be dismal failures because the cleanliness ignored reality, or just plain took too long to develop. And I've seen dirty, ugly hacks save hundreds of thousands of dollars in fines, or generate millions in revenue. No, my lesson isn't to abandon good coding practices. The lesson is that code is a means to an end - not the end itself.
    Again: defining the business data model for a company is not about technology. Forget programmers, forget clean code or systems. It is about getting everyone to call a bird a bird, so that everyone understands all the aspects and consequences of business processes and data. How each system will implement it is another department's concern. ;)
    Many people in IT are focused most acutely on making their jobs easier. Sadly for them, this is sometimes the worst possible thing for the business as a whole. I've seen (and occasionally even _been_ one of the...) programmers curse their jobs, how difficult they are, and what a mess some other developers caused by decisions made X years ago. But many programmers miss the fact that quite often, those "bad" decisions may have saved the business at the time, or generated big revenues that pay your salary today, or validated a technology or approach that was unheard of at the time. In a quest for software perfection too often people forget how often pragmatism pays the bills and gets the job done.    -Mike
    IMHO, it is not about making things easier, but about saving millions and not wasting time later when systems must be integrated, and a complete view of information must be obtained by crossing data from existing systems.

    BTW: there is already a huge effort to define a common data model for telecom: TMFORUM's SID
    and a common business process mapping: TMFORUM's eTOM
    and a common Java API for Telecom Systems: OSS/J

    These are not to be cast in stone, but to serve as a guide when defining systems. Of course every system will have its particularities, as every telecom business has its particularities as well, but in general, they will all be based on the same general view of the business, and that is what will save us a lot of $ and time when we need to integrate systems.

    Regards,
    Henrique Steckelberg
  82. missing the point[ Go to top ]

    For me it is quite obvious that any, I mean, ANY reference model for the key business entities would have helped us hugely. IMHO, it is plain obvious.
    I agree - in theory. The question is, can it be reasonably done for complex industries?

    I usually see one of two things when such an attempt is made:

      - The attempt is too shallow. Basics in one or more areas are not addressed.
      - The attempt is too complex due to trying to integrate many conflicting requirements.

    The first is obvious - someone does the business equivalent of CS 101 homework and calls it the universal business model :-)

    The latter is increasingly common - the modelling team manages to capture tons of subtlety. The problem is, where a group used to deal with maybe 10 classes with a small number of variables for a given use case, the genericized cross-enterprise model might balloon that by an order of magnitude. It is possible for a modelling team to find the sweet spot in between - but this is extraordinarily rare.
    Maybe because your environment is different from mine. But I am not sure about that.
    Could be. Finance has data standards, like FIX and SWIFT, that define interoperability between companies. These took a long time to hammer out, and can be complex, but they're sufficient. But even such a "sufficient" standard doesn't come close to capturing everything a brokerage or bank needs in a number of application areas.
    And what does the business model have to do with the technology? Nothing: you define a model, it can be applied anywhere, and that is one of the reasons the data model outlives the technology and the systems. What is done is done; this kind of model should be used in new projects and in specific cases when maintaining existing systems, but not to mandate a complete rewrite of existing systems to make them compliant with the model: that is not feasible in most cases. This way the company will have a clear definition of where, how and what the systems must be in the future, as they evolve.
    Well, in financial services the two are very closely tied together, and the rate of change in the business is at least as fast as it is on the technology side. Part of the issue here is the sheer volume of data involved - financial people constantly have to "cheat" and take modelling shortcuts to get the performance they need. Indeed, some systems can't even _use_ an RDBMS because such a system simply cannot keep up with the volume. Models often do outlive code - but this is far from universal because of throughput requirements.
    Blame the architecture group, not the model. If you have the right group, with the right information, you should have a fairly good model in the end. They should not focus on specific systems or areas, but instead on users' general needs and mainly: the company's needs and business. Then you will get a good model. The moment you start looking into the existing systems, your model is gone.
    In many brokerages there is no "architecture group", nor is it really feasible - at some high level the data can be treated the same, but at deeper and very specific application levels it can be radically, radically different (again, throughput requirements alone can create a monster schism). What brokerage houses have to deal with are tremendous non-functional requirements plus very complex business rules, and "clean" modelling and methodology techniques just don't always seem to cut it.
    Again: defining the business data model for a company is not about technology. Forget programmers, forget clean code or systems. It is about getting everyone to call a bird a bird, so that everyone understands all the aspects and consequences of business processes and data. How each system will implement it is another department's concern. ;)
    The problem in finance is a bit more complex than calling a bird a bird.

    On one hand, someone may be trying to deal with 500 million birds in a day. Another application may be analyzing 50 billion birds over time to spot trends. Yet another is worried about delivering one bird from point A to point B in 40 milliseconds or less. Some systems will look at a bird and the gross species and color will be sufficient. Others will go down metaphorically almost to the DNA level - i.e. the level of detail required by different systems can differ by more than an order of magnitude.
    IMHO, it is not about making things easier, but about saving millions and not wasting time later when systems must be integrated, and a complete view of information must be obtained by crossing data from existing systems.
    I'm coming at this from a different perspective - efforts to get closer to a universal model often waste millions of dollars and never go anywhere useful. In many cases, such a pure "modelling" job attracts paper architects and Microsoft-Project addicts who excel at generating megatons of paper and don't understand anything about implementing something useful. To put it another way, attempts to create cross-application models often draw in the worst people to attempt such a beasty, and the end result is millions of dollars wasted with no results. And I literally meant what I said in an earlier post - I've seen such groups which were purposefully treated as retirement posts or "hint" positions in some companies (e.g. if you were transferred to such a group, it was a hint that you were not wanted).

    The difference here, I think, is that you are assuming that a grand modelling attempt will actually succeed. From what I've seen, most of the time they fail miserably.

        -Mike
  83. missing the point[ Go to top ]

    Mike, from what you have described, financial systems are very much like telecom systems: we too have to deal with millions of phone calls per hour, transfer phone calls in milliseconds, search for trends in millions of phone calls, interoperate with other companies for long distance calls and billing, etc. So actually both areas are very similar.

    So let's just settle this as it is: we agree to disagree regarding the need to have an abstract view of things. Maybe both views are valid, because as far as I can tell you have had success in your career, so your pragmatic view of things has indeed worked in what you do. My experience is a bit different, maybe because of some differences in our environments; for me it is clear that the lack of a general definition of the whats and hows of the company's business has hugely impacted our integration efforts.

    Regards,
    Henrique Steckelberg
  84. missing the point[ Go to top ]

    OK. Part of the differences may be the business drivers and business attitudes. Telecom and financial services clearly both deeply rely on technology as part of their core businesses (you can't make money if people can't make calls, or if trades don't go through), but they may be approached from radically different perspectives.

    Finance in particular is very interesting because, like programming, it's all in your head ;-). What I mean by that is that financial "instruments" and securities are all abstract entities that people have made up. A bond, a derivative, an equity stock - they're abstract concepts which define contracts between people. Basically, anything people can buy or sell, and which can be split into nice "chunks", can be "securitized" and traded. Pretty much all companies are organized along lines of what sort of thing is traded - equities (stocks) vs. fixed income (treasury bonds, municipal bonds, etc), for example - but even within those constraints there's a lot of variation, and since it's all abstract it can change at a whim. The only reason it doesn't is the realities of the marketplace and of implementing systems that can deal with changes. As an example, "settlement" (when the securities and money actually change hands) is arbitrary. It used to be defined as "T+3", which means you actually swapped 3 days after you sealed the deal. The industry is moving towards T+0, meaning the swap is made the day of the trade. There are no physical laws or other concrete constraints defining this; it's just human agreements. Now that technology is fast enough to possibly do T+0, people want it - and it has an enormous impact on trading systems and back office systems.

    Since so much is arbitrary and based on agreements, not physical realities, change is a constant, and often whole departments can spring up to write code that takes advantage of specific agreements that are in place at a given time. For example, mortgages (just like people's home mortgages) can be lumped together and "securitized". But mortgages are weird - the properties can differ, amounts can differ, the creditworthiness of the people can differ, and the payback rate of the mortgages can differ. Because of this, it's hard to securitize a mortgage and get an exact amount. There's natural variation. At one point, the securities laws said you could agree on a mortgage backed security (MBS) trade where the seller had to deliver within 2% (I don't know if that's still true today). This agreement was made because of the lumpiness of MBS'.

    Some smart cookie saw this 2% variance and dollar signs burst in his head. Selling mortgages means you can combine many mortgages of "like type" to come up with the final tally you sold (e.g. if you sold $10MM of a given type of mortgage, you could combine many mortgages to get to that $10MM). The smart cookie saw that if you owned a lot of MBS, and were really smart about allocating which mortgages went into a sale, you could consistently sell, say, $10MM worth of mortgages but only deliver 98% of $10MM - you could theoretically save 2% automatically.

    So this smart cookie went out and hired programmers to create "mortgage allocators". These are programs that sort through your loan inventory and all of your MBS sales for the day and try to mix-and-match so that theoretically you're stiffing all of your buyers consistently by 2%. This is a pure computer science problem of matching M chunks of a given type into N bins, with a finite set of chunks and well-defined number of bins (and a hard problem too). The allocator program that could come closest to that 2% "wins" - in terms of essentially free profits.
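    For flavour, here is a toy version of that matching problem - all names invented, and the greedy heuristic below is far simpler than a real allocator, which would push the delivered total down toward the floor (and juggle many sales and "like type" rules) to squeeze out the last fraction of the skim:

    // Toy sketch of a mortgage allocator as bin packing (names invented).
    // Goal: pick loans totalling within [sale - tolerance, sale] for one sale.
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class MortgageAllocator {
        static List<Double> allocate(List<Double> loans, double sale, double tolerance) {
            List<Double> pool = new ArrayList<>(loans);
            pool.sort(Comparator.reverseOrder());          // first-fit decreasing
            List<Double> chosen = new ArrayList<>();
            double total = 0;
            for (double loan : pool) {
                if (total >= sale - tolerance) break;      // already inside the window
                if (total + loan <= sale) {                // never deliver more than sold
                    chosen.add(loan);
                    total += loan;
                }
            }
            return total >= sale - tolerance ? chosen : null;  // null: no feasible mix
        }

        public static void main(String[] args) {
            List<Double> inventory =
                List.of(5_000_000.0, 3_000_000.0, 1_500_000.0, 400_000.0, 300_000.0);
            List<Double> picked = allocate(inventory, 10_000_000, 200_000);  // the "2%"
            double delivered = picked.stream().mapToDouble(Double::doubleValue).sum();
            // Here: delivers $9.9MM against a $10MM sale - a $100K skim.
            System.out.println("delivered " + delivered);
        }
    }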

    If that 2% were changed, let's say to 0.1%, then the math of the whole proposition would change - and keep in mind that the "2%" is defined only by agreement within the industry, not any kind of natural law or other concrete reality. I believe over time that 2% has lessened, largely due to the efficiency of mortgage allocators on the street. So there was a "window" where brokerages could make a huge amount of money from a program whose only "features" were that it had to be correct (match like mortgages with like bins), it had to be fast, and it had to come as close as possible to the allowable variance. While this window was open, the program that did the best mix-and-match and came closest to the 2% in the allowable compute time "won". Once the variance window closed (or became small enough as to be a wash), it really didn't matter very much anymore.

    This is what brokerage/banks are like. The "laws" are arbitrary, contain holes you can take advantage of, and they will change over time - so you'd better take advantage of them while you can. Imagine you pay two super-genius programmers $200K a year or something like that, and they create a program that lets you skim 2% off of literally billions of dollars worth of MBS trades. While the window's open, nobody's going to care how "clean" the allocator is. Having an unclean one can have a long-term disadvantage, but this is a risk that brokerages know how to manage.

         -Mike
  85. Credit derivatives[ Go to top ]

    Hi Mike

    I saw the length of your post and presumed that I could reply suggesting you be more concise. Instead I found it fascinating.

    My own side of the community is Repos and Swaps, particularly the initial and variation margining of positions in these contracts. I enjoyed your post and I think I learnt something more about the collateralized debt obligations area.

    Thanks!
    Robin.
  86. To those that think that OODBMSs are no longer relevant, check out:


    http://www.objectstore.net/products/rtee/index.ssp


    Most companies that use OODBMS don't advertise it because it is a competitive advantage.

    Cheers

    Bart
  87. This is a bit misleading - I dare you to show a technology that is _not_ used somewhere on Wall Street.

    The question is how is it used, where, what are the plusses, and what are the minuses. When I last looked at OODBMSs they had several flaws, but the one that stood above all the others was the non-standard query mechanisms. In fact, quite often you had to write code to query the database. Where there was a query language, it was usually underpowered and/or non-standard.

    A very close second was schema evolution - something that's well understood in the RDBMS space, and has always seemed to be a mess in OODBMS systems.

        -Mike
  88. Mike,
    How is RDBMS schema evolution any easier than OODBMS schema evolution? Educate me. When did you last work with an OODBMS? For apps that have complex data, or simple data but complex relationships, OODBMSs shine. For any type of app navigating through an object graph where declarative mechanisms fall short, an OODBMS works well.

    If you are talking about simple data and simple relationships - stick with an RDBMS. I don't get very religious about this. I believe in using the right tool for the job. I just hate to see OODBMSs get a bad rep.

    Most times when there was pushback from a customer site on the use of an OODBMS, it was from the "database police" - RDBMS DBAs who were scared because they did not understand the OO world and the OODBMS capabilities.

    Hey, but don't take my word for it. The organization that hosts the Tour de France has been using ObjectStore for years. From an eBizQ article:

    <eBizQ>
    07/14/03--Followers of the Tour de France sporting extravaganza have an unquenchable thirst for real-time information on riders and results as the race unfolds from July 5-27 over 2130 miles circumnavigating France. The official web site of the largest sporting event in the world typically receives 270 million visits at a rate of up to 35,000 hits per minute, in previous years peaking at 21 million hits during one day of the race. This year looks to be no different with the number of visits more than doubling over last year for the first stage of the race.

    Achieving fast response times and handling such a heavy workload is no task for an ordinary relational database. When the going gets complex and demanding, alternatives like object-oriented databases can be more suitable.

    The performance and scalability of ObjectStore is ideally suited to delivering large volumes of complex data in real-time to millions of viewers worldwide

    http://216.239.51.104/search?q=cache:0MoS-HxMfbcJ:www.ebizq.net/news/2185.html%3Fpp%3D1+tour+de+france+Objectstore&hl=en&start=2
    </eBizQ>

    ...and no, I don't work for Progress Software, the makers of ObjectStore...

    I've got a colleague who also says good things about Versant.

    Bart
  89. ObjectStore (now owned by Progress Software) is represented on the JDO 2.0 expert group, as is Versant (which also owns Poet).

    Yes, JDO is great for ORM. Yes, JDO is fantastic for manipulating objects that live in relational databases. But JDO also provides a standard programming and query capability that works well on the above ODBMS technology. I think this is a good thing for object database vendors - it lets them compete against relational vendors on performance and scalability, rather than on the completeness and (in-)efficiency of their SQL implementations.
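    For example, the same JDOQL query runs unchanged whether the PersistenceManager underneath is backed by an ODBMS or an RDBMS. A minimal sketch - the Trade class is invented, and the factory properties are vendor-specific; only the javax.jdo calls are standard:

    // Standard JDO API; Trade and the PMF properties are illustrative only.
    import java.util.Collection;
    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Query;

    // Invented persistence-capable class; O/R or ODBMS metadata omitted.
    class Trade { double notional; }

    public class BigTrades {
        public static void find(Properties vendorProps) {
            PersistenceManagerFactory pmf =
                JDOHelper.getPersistenceManagerFactory(vendorProps);
            PersistenceManager pm = pmf.getPersistenceManager();
            try {
                Query q = pm.newQuery(Trade.class, "notional > limit");
                q.declareParameters("double limit");
                Collection trades = (Collection) q.execute(new Double(1000000));
                System.out.println(trades.size() + " big trades");
            } finally {
                pm.close();  // results are only valid while the PM is open
            }
        }
    }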

    Kind regards, Robin.
  90.   How is RDBMS schema evolution any easier than OODBMS schema evolution? Educate me. When did you last work with an OODBMS? For apps that have complex data, or simple data but complex relationships, OODBMSs shine.
    I last looked at them a few years ago. At the time, support for schema evolution was pretty terrible compared to what's been available for RDBMSs. If that's changed, I'd like to hear about it.

    Specifically - let's say I have several million "rows" of data (rows in quotes for the obvious reason) against schema version X. Schema version Y adds new tables, adds some new columns to existing tables, and moves some columns around. This can be simply managed in an RDBMS with SQL scripts. What tools do OODBMS vendors supply to automate this sort of scenario in the OODBMS world? The traditional problem here is that the OODBMS is intimately tied to the code and class structures, and (in the olden days at least) OODBMS vendors didn't have much at all to help when your object model (and hence your schema) changed. In an RDBMS world code and data are separated, and are treated separately.
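    For instance, a version X to version Y migration is just DDL plus a backfill, runnable from anything that speaks JDBC. A minimal sketch - the table and column names are invented and the JDBC URL is a placeholder:

    // Sketch of an RDBMS schema migration via plain JDBC (names invented).
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class MigrateXtoY {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection(args[0])) {  // JDBC URL
                Statement s = c.createStatement();
                s.executeUpdate("ALTER TABLE trade ADD settle_date DATE");
                s.executeUpdate("CREATE TABLE trade_audit (trade_id INT, changed DATE)");
                s.executeUpdate("UPDATE trade SET settle_date = trade_date"); // backfill
                // Existing rows survive untouched; the code ships separately.
            }
        }
    }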
    For any type of app navigating through an object graph where declarative mechanisms fall short, an OODBMS works well.
    This is kind of an empty statement, and is biased towards "ease of use". RDBMSs shine because they are optimized for efficient storage of large amounts of data, and for running queries against that data and retrieving it. The relational model has been proven to scale for persisting data and getting it back in a large number of environments.

    It has not been proven that the same efficiency (in storage, and in speed) can be achieved with OODBMSs, at least not from my own investigations. As I've said before in similar threads, the best design for in-memory application data access is not necessarily the best design for storing data and retrieving it later. Constructs that work well in memory - trees and lists and maps and the like - are not necessarily the best way of storing and keying persistent data for retrieval.
    [...stuff on ObjectStore...]
    That's a really, really nice marketing blurb, Bart. Ahem - "When the going gets complex and demanding, alternatives like object-oriented databases can be more suitable". Marketing drivel if I've ever heard it!

    I note that you haven't said anything on standard query mechanisms a la SQL. Any reason why?

    Out of curiosity, since you brought it up, who do you work for? You're certainly a very energetic OODBMS cheerleader, I'm curious why.

         -Mike
  91. Credit derivatives[ Go to top ]

    Cool beans. After I wrote the thing on MBS mortgage allocators I was actually thinking about repo systems as well, but my memory of them is so dim that it would be useless to comment in any detail. Maybe you should post a bit on the intricacies of your own domain - I'd love to hear how repos are done in the modern era (as opposed to my own dinosauric reminiscences).

        -Mike
  92. Multiple Apps?[ Go to top ]

    But what if you don't have the only app that touches the data? What if two apps have radically different notions of what their object models should be but need to get at the same data? What if 8 such apps do?

    I've been thinking about this same problem. One possibility is an extra layer which acts like remote procedures. Each Java application could call a "remote procedure" to get back a Collection of POJOs. Non-Java apps could call a web service, which would then forward the call to the "remote procedure" to get the data. In this way, the web service would act as a translator by returning the POJOs as XML data structures. This idea would also place all the business logic and enterprise data structures on a central server, not buried within each application.
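    Roughly the shape I have in mind (all names invented) - one facade that Java callers hit directly and a web-service wrapper exposes to everyone else:

    // Invented sketch of the central "remote procedure" layer.
    import java.util.List;

    // Plain POJO - no persistence or app-specific API leaks out of the layer.
    class Customer {
        String id;
        String name;
    }

    // The single shared entry point holding the business logic and data shapes.
    interface CustomerFacade {
        List<Customer> findByRegion(String region);
    }

    // Java apps call the facade directly; a web-service endpoint would wrap
    // this same facade and serialize the returned POJOs to XML for non-Java
    // apps, so the logic and the enterprise data structures live in one place.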
  93. missing the point[ Go to top ]

    On top of the above - in many applications the data also outlives the code by a wide margin. This isn't true in something like a daily trading system, but it is true for systems that store much more historical data. Once again - the data model is more important than any given application, doubly so in this scenario since the data will likely outlast the app!
    This is true for PetStore-type applications too. One of the internal applications in our company was implemented using ColdFusion (it was not a good idea); next it was rewritten using Java and EJB. Now we have 100x more data and it performs better, but to make it work the application was rewritten using plain JDBC (OK, not so plain - a home-made framework). The data is the same as it was in the ColdFusion version; the data model changes, but there is no pain - an RDBMS is designed for schema evolution too.
    For all new applications I use simple things - helpers for JDBC - and no EJB version can make them obsolete. They work in a container and without one, they work for PetStore-type applications (though PHP is probably a better way), and they work for the "enterprise" and for integrations with "legacy" systems without problems - and better than buzzwords.
  94. missing the point[ Go to top ]

    Firstly, this is missing the point. What is needed is to persist objects, not have a data model for its own sake.
    This is just completely wrong. The data model lasts much longer than the application.
    Wrong, Mr King. The object model is more fundamental to the application than the underlying format used by a persistent object store.
    If you are going to post something as provocative as this, the least you can do is substantiate it.

    Here is something for you to think about: how long has SQL been around? And do you think that Java will still be around in the same amount of time?

    --
    Cedric
  95. Bravo![ Go to top ]

    "Here is something for you to think about: how long has SQL been around? And do you think that Java will still be around in the same amount of time?"

    Cedric, you are my hero!
  96. Bravo![ Go to top ]

    "Here is something for you to think about: how long has SQL been around? And do you think that Java will still be around in the same amount of time?"Cedric, you are my hero!
     SQL, UNIX, TCP/IP are more "stable" than JAVA, see jdk1.5 and EJB3, It is very clear. JAVA is popular on server just because it makes things (JDBC,IO/Sockets,JMS) simple on server ( LINUX / UNIX ) and lets windows programmers to code for UNIX. See COM, VB, MS DOS / windows, .NET and think about it too.
  97. missing the point[ Go to top ]

    >Wrong, Mr King. The object model is more fundamental to the application than the underlying format used by a persistent object store.
    Do not forget .NET :) If the big Java players accept this new EJB transparence, everything you have coded using EJB becomes "obsolete legacy" and crap too, and you will need new buzzwords - probably they will be .NET. The data model and the data will stay the same, but you will need to drop your "transparent objects", so I think the data model is more fundamental.
  98. Well Done Linda and co.[ Go to top ]

    So the entity bean model was wrong, and yes, it will be a painful transition to EJB 3 when it arrives. But at least the community has been listened to, and innovative ideas from products like Spring, Hibernate, etc. have been taken on board.

    Well done to all for turning the spec. around. Roll on rapid development, TDD and MDA.
  99. Ditto..[ Go to top ]

    I also would like to say congratulations to the EJB 3.0 team on their chosen direction. In looking at how far we still have to go, we often forget just how far we've come. I remember trying to write component-based applications using C++ and CORBA. We've come a considerable way since then.

    When J2EE was first proposed it was clear that Java needed an enterprise component framework, and quickly; otherwise enterprise Java would have fractured into incompatible pieces. The technical shortcomings are well known, but J2EE has been a very successful standard. And as a standard it is almost ubiquitous (I haven't come across any job adverts asking for Hibernate, Spring etc. It is the J2EE on my CV that keeps me in work).

    So what if it isn't the best solution available (maybe even at inception)? Surely it is up to the industry to innovate, for new solutions to be tested through real-world experience and finally for the best proven solutions to be standardised. I think this is exactly what has happened, and I can't see how it could work any other way!

    The speed at which the "standardisation" process is taking place is incredible in my view, given the diverse interests involved. And I am not sure whether the industry could absorb change any faster.

    If they pull it off, it can only be seen as a success. And if by then there are better "best of breed" solutions, so much the better.
  100. Ditto.. (repost)[ Go to top ]

    The same post, but with apostrophes instead of raw unicode values. That's the last time I compose a post in Word!
    I also would like to say congratulations to the EJB 3.0 team on their chosen direction.
    For the record I also welcome EJB 3.0's improvements to the Session components.
    We've come a considerable way since then.
    Yes, we have. Whilst we were doing that EJB was practically standing still. Does the JSR-220 specification lead realize the immense credibility gap that J2EE/EJB has to make up if EJB 3.0 is not to be largely irrelevant by the time stable and performant implementations become available?
    The technical shortcomings are well known, but J2EE has been a very successful standard.
    Interesting non sequitur there. I presume you're referring to the brand management of J2EE? That has been remarkably successful.
    <snip>It is the J2EE on my CV that keeps me in work <snip> So what if it isn't the best solution available?
    Ok, I'll admit it, the above is a deliberate but humorous and sadly appropriate mis-quote of your original post. I'm referring to the lack of acknowledgement of EJB's inherent problems over the last four years, particularly "don't use JDO because CMP/CMR has the J2EE blessing"...whereas this week they suddenly acknowledged that much of it was a mistake. Hopefully people on both sides of the equation will at least see the funny side of the mis-quote ;-)
    Surely it is up to the industry to innovate, for new solutions to be tested through real-world experience and finally for the best proven solutions to be standardised. I think this is exactly what has happened, and I can't see how it could work any other way!
    Ok, let's get serious. The JCP works because standards are shown to be implementable (the RI) and are then implemented competitively by both open source and commercial parties. The Java community's big argument against .Net was for a long time that its spec and RI were one and the same. Now, because it suits some, this argument is to be reversed and you want to standardise a product?

    Yes the industry must innovate - that is why JDO 1.0 did not attempt to specify O/R mapping, but waited instead to see what the vendors would come up with. Yes, the best solutions to a problem should be standardized - that is why JDO has field-level interception.

    But just how does your comment relate to EJB 3.0? The persistence model is being completely reengineered. For the third time. This will be something new again (albeit with strong Hibernate roots).
    If they pull it off, it can only be seen as a success. And if by then there are better "best of breed" solutions, so much the better.
    I suspect that there already are better "best of breed" solutions in many core areas although they do not yet have the J2EE stamp. However, the stated direction of EJB 3.0 will effectively prevent new entrants from joining the game and stifle competition. It also unnecessarily propagates the lock-in to relational storage. So what we actually have is something resembling an oligarchy - just a handful of companies sharing a monopolistic position and acting in such a way as to maintain that position. Dion has realised this and in his blog he calls for backward compatibility of the 2.0/2.1 EJB specs to be optional.

    The JSR-220 team has largely done a good job. On the session bean side they could give more credit where credit is due to innovating projects that have influenced their ideas. On the persistence side I am really not convinced that they are acting in the interests of the wider Java community, given that a datastore-agnostic standard for transparent persistence is already in place. And if bytecode enhancement really was a problem for them, I'm sure that - with their added weight behind it - a JSR to implement persistence hooks into the JVM could have been approved in time for JDK 1.5, entirely obviating the need for PersistenceCapable and leap-frogging Java over the .Net CLR's capabilities.

    Despite the critical tone of my comments above, and as Cameron Purdy famously once said, Peace.
  101. think of poor EBay![ Go to top ]

    "because CMP/CMR has the J2EE blessing"...whereas this week they suddenly acknowledged that much of it was a mistake."

    "So what we actually have is something resembling an oligarchy - just a handful of companies sharing a monopolistic position and acting in such a way as to maintain that position."


    Exactly, except that it should be named "con-business", for that is what it is.
    ARGGHH!! EBay is down again tonight!!!!

    It is "products" like this that give the whole IT industry a bad reputation and make consulting a scam rather than an honorable profession.

    As far as ethics goes, "intentional negligence" that leads to harm is no different from "intent to harm".

    Regards
    Rolf Tollerud
  102. Peace (and respect)[ Go to top ]

    Good points and well put. I agree that insisting on backward compatibility is an obvious bar to new entrants. I also like your play on my initial words:

    "<snip>It is the J2EE on my CV that keeps me in work <snip> So what if it isn't the best solution available?"

    There is a lot of truth in this. The fact is that companies are keener on standards than on technology that may be perceived as "experimental" or "bleeding edge". Often the winning argument is a commercial one, not a technical one (can I get the developers I need to do this stuff, and what happens if the single-source vendor or open source project goes the way of the dodo?).

    So enterprises get what they ask for, and developers lament the missed opportunities. C'est la vie.

    About persistence: I thought they were "standardising" on Hibernate? Why re-invent? You mention persistence through the JVM - please tell us more. Conceptually this just sounds right and, as you point out, would be a great differentiator.
  103. Do you like object orientation, or are you more attracted to the mathematical properties of the relational data model? If the mathematical properties of the relational data model attract you, do mathematical properties in programming also attract you and if so why aren't you programming in non-imperative languages?
    As a programmer I like OOP, transparence and innovations/experiments too, but "enterprises" like valid data, transactions, and declarative data access with tools like Excel to draw charts, cubes and reports. An RDBMS solves most enterprise problems, and mathematical properties never become obsolete the way JDO x.x or EJB x.x do.
    The relational data model is very evil; I do not like it, and I like transparent dreams too. But I live in reality and I am not going to fight it - I must work to live, not to dream. JDO/EJB is more about dreams than about reality, and that makes this garbage obsolete before any specification is even published, doesn't it? All real and good enterprise systems are implemented with "legacy" tools and "mathematical properties".
    Dreams are good for PetStore/homepage systems and experiments; the relational data model and mathematical properties are useful for creating real systems.
    Is it so hard to understand?
  104. Well done[ Go to top ]

    Yes, well done EJB designers - you are now getting close to what EJB should have been in the first place. Please rest assured that there is no anger or bitterness whatsoever from those of us who have been forced to work with a frankly abominable API for the last 4 years.

    A serious comment though - aren't these meta tags effectively a new language element which is going to break shedloads of existing tools? Why not make the tags doclet-based?

    Paul C.
  105. Well done[ Go to top ]

    The meta tags are JSR 175, already implemented in JDK 1.5.
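    For those who haven't seen JSR 175 yet, a minimal sketch (the annotation is named after the proposed EJB 3.0 @Session purely for illustration; the real annotation's definition may differ):
    import java.lang.annotation.*;

    // An annotation is a typed language element, declared much like an
    // interface and checked by the compiler at every use site.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface Session {
     String name() default "";
    }

    // A container (or any tool) can then discover it via reflection:
    //   boolean b = SomeBean.class.isAnnotationPresent(Session.class);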
  106. "Please rest assured that there is no anger or bitterness whatsoever from those of us who have been forced to work with a frankly abominable API for the last 4 years"

    Paul, that was not nice. But in spite of your sarcasm, there really are a lot of people out there who are satisfied and patient and happy, and I sometimes think that Java enterprise coders must be the most "round and easy Pollyanna people" in the whole world.

    Every time a new EJB standard is released, they say that now it's working, now all the promises will finally be delivered - but no. The next year the same people are debunking their own work - then again, and again.

    What is all the fuss for, when you can develop with Spring, Velocity and Hibernate and scale to the sky? And probably outperform this "new" EJB by a factor of 28X?

    Ah, that is not a big secret; IBM, BEA and Oracle are trying, with the help of Sun, to protect "the Big Java Application Server" market, which without EJB has no reason for existence. Soon .NET will have its IoC/dependency-injection frameworks and then there will be no real architectural differences between developing in Java or .NET/Mono. No more need to buy servers for $50,000 (and then pay 11 dollars in consulting for every dollar spent on product, according to IBM's own calculations).

    Then an era even stranger than the "Tulip Craze" of the 1700s comes to an end.

    Regards
    Rolf Tollerud
    Soon .NET will have its IoC/dependency-injection frameworks and then there will be no real architectural differences between developing in Java or .NET/Mono.
    So you admit that .NET is lacking those and that you need them. :-P I don't miss anything from .NET. As for the hope that there will be no difference... keep hoping.
  108. how to lure oneself[ Go to top ]

    In case you don't know: Java == C#. They are the same; C# is a fork.
    So if you bash the one, you bash the other.

    The main reason why the ISO-standard C# fork of Java, with an open source implementation (Mono beta 1 out yesterday), will probably replace Sun-Java is these ridiculous hang-ups about EJB, persistence and the like.

    One may ask why? I will venture an explanation.

    Unable to compete with Microsoft in terms of performance, productivity and just plain quality, the Sun/Oracle/IBM camp in desperation came up with the idea that they are the computer scientists - the ones who talk in terms like "we organize inter-service transfers according to use cases from known domain objects into a coarse-grained Composite!" and are much more serious and competent than irresponsible Microsoft hackers.

    But now that we have the final tally, unfortunately, that strategy has destroyed more money in a few years than 50 years of computing ever managed to.

    But why does the Java guy still cling so desperately to the Big App Servers and EJB?

    Because this belief in being superior is his very reason for living; if he accepted the truth he would fall apart and be an empty shell.

    Vanity rules the world.

    Regards
    Rolf Tollerud
  109. how to lure oneself[ Go to top ]

    Rolf, if you repeat that 1000 times (what is it by now, 283?), maybe it will become true...
  110. how to lure oneself[ Go to top ]

    Henrique: "Rolf, if you repeat that"
     
    I know, I know - but why can people not admit that they were wrong? What about the customers?

    Going on the fifth year now with a technique that was doomed from the beginning. Forcing sub-par solutions upon poor customers. You can see in this thread that there are still many who believe in it.

    All in "a good cause". Incredible.


    Madre Mia
    Rolf Tollerud
  111. how to lure oneself[ Go to top ]

    Henrique: "Rolf, if you repeat that"  I know, I know but why can not people admit that they was wrong? What about the customers?Going on the fifth year now with a technique that was doomed from the beginning. Forcing upon customers upon poor customers sub-par solutions. You can see in this thread that there are still many that believe in it.All in "a good cause". Incredible.Madre MiaRolf Tollerud
    284 and counting...
  112. how to lure oneself[ Go to top ]

    I guess no apology will be forthcoming then?

    All the fun of correctly predicting the future is taken away..

    Grr..
  113. how to lure oneself[ Go to top ]

    I guess no apology will be forthcoming then? All the fun of correctly predicting the future is taken away.. Grr..
    Only 715 to go. The guy is tireless... ;) (Sorry for the noise)
    When you can develop with Spring, Velocity and Hibernate and scale to the sky? And probably outperform this "new" EJB by a factor of 28X? Ah, that is not a big secret; IBM, BEA and Oracle are trying, with the help of Sun, to protect "the Big Java Application Server" market, which without EJB has no reason for existence.
    Now, this is a rather loose statement. Often, in this forum, the APIs and the runtime requirements get mixed up. EJB is a complete runtime platform specification. Spring, Velocity and Hibernate don't come with a complete runtime environment - Hibernate is just a persistence framework (granted, it may offer the same as entity beans do in a local-access context), Spring is a layer on top of the actual runtime (more a development abstraction that relies on platforms such as J2EE for the actual runtime), and Velocity is a layer similar to JSPs. That is the state today. Theoretically, of course, one could take Spring and provide a complete, reliable, robust runtime. But why?

    J2EE is well accepted for what it is today (lots of money is being made on this platform, and no one puts their money in for nothing these days!). The exhaustive functionality, reliability and robustness of the platform, and its standards compliance, are key reasons for this. Sure, there may be deficiencies in the APIs and the usability of the standards. But that doesn't take away the utility of today's J2EE App Servers.

    IMHO, refining the APIs/usability and leaving the rest of the runtime functionality as it is, is what is needed now, rather than suggesting we scrap J2EE/EJB entirely (and start using (standardise?) Spring+Velocity=Hibernate)!
     
    Cheers,
    Ramesh
    When you can develop with Spring, Velocity and Hibernate and scale to the sky? And probably outperform this "new" EJB by a factor of 28X? Ah, that is not a big secret; IBM, BEA and Oracle are trying, with the help of Sun, to protect "the Big Java Application Server" market, which without EJB has no reason for existence.
    This was a quote.
    ..rather than suggesting we scrap J2EE/EJB entirely (and start using (standardise?) Spring+Velocity=Hibernate)
    Meant to say "Spring+Velocity+Hibernate".
  116. Well done[ Go to top ]

    A serious comment though - aren't these meta tags effectively a new language element which is going to break shedloads of existing tools? Why not make the tags doclet-based? Paul C.
    How would this break "shedloads" of existing tools, considering that the syntax is new and only JDK 1.5 (still in beta) supports it?

    Regardless, I think that JSR 175 brings nothing but improvements over the error-prone, untyped and non-standard Javadoc-based annotations we've been using so far.
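    To make the contrast concrete, a side-by-side sketch (the XDoclet-style tag is representative, and EchoBean is a made-up example):
    /**
     * Javadoc-based metadata is untyped text: a misspelled tag or
     * attribute is silently ignored until a code-generation tool
     * (maybe) complains.
     *
     * @ejb.bean type="Stateless" name="Echo"
     */
    public class EchoBean {
     public String echo(String s) { return s; }
    }

    // The same bean expressed with a JSR 175 annotation, a typed
    // language element that the compiler itself checks:
    @Session public class EchoBean {
     public String echo(String s) { return s; }
    }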

    --
    Cedric
  117. That just looks so good I want to pinch myself to see I'm not dreaming.

    Could this be true? Could EJB actually start becoming useful - that is, could you pay less than what you actually get? (Well, Microsoft managed it with the .NET enterprise component model, so why shouldn't Java?)

    I'm still a bit sceptical though; I'm sure IBM or BEA might step in and mess the whole thing up. Just remember the whole PortableRemoteObject.narrow malarkey, or the CORBA interop hairball. There's just too much trauma in EJB's history. For example, we'll see what those nice clean samples look like once they've added backwards compatibility.
  118. Interesting that *all* of the replies have focussed on persistence - one small part of the EJB spec.

    I can't help feeling, however, that this spec will be an irrelevance by the time it comes to fruition. Personally I feel that the likes of Spring, Hibernate and JDO are making an end-run around EJB. The days of a monolithic framework like this are numbered. At least I hope so.

    P.S. Am I the only one who thinks that attribute-based programming is a bit of a Heath Robinson approach to cross-cutting concerns? I think that the next 10 or 20 years in computer science (vis-a-vis programming models) will be quite interesting after all. There simply *has* to be a better way.
  119. Interesting that *all* of the replies have focussed on persistence - one small part of the EJB spec.
    Well, persistence certainly isn't a small part of the EJB spec page-wise. If you check the EJB 2.1 specification you can see that entity beans alone take around 200 pages, and transaction support (which you'll agree is mostly related to persistence) takes another 40 pages. That is almost half of the whole spec (which runs to 600-some pages) if you cut out the fluff (index, FAQ, some appendices). Add to that that most people say and write "EJB" instead of "entity beans"...

    Now if you weight them by their usefulness, then you are quite right..
    The EJB 3.0 spec appears to be shunning JDO on the grounds of its now-optional bytecode enhancement and its persistence-by-reachability, both of which are actually what make JDO the incredibly efficient and powerful technology it is.

    Perhaps, then, my next "agenda" should be to get JDO 2.0 added into J2EE 1.5 alongside EJB 3.0, so that developers constrained by the "only use J2EE technologies" mantra so often peddled by the various stakeholders can still have the choice....
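    For readers who haven't used JDO, a minimal sketch of persistence-by-reachability (Order and OrderLine are made-up persistent classes; pmf is an already-configured PersistenceManagerFactory):
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;

    public class ReachabilityDemo {
     public static void persistOrder(PersistenceManagerFactory pmf) {
      PersistenceManager pm = pmf.getPersistenceManager();
      pm.currentTransaction().begin();

      Order order = new Order();
      order.addLine(new OrderLine("widget", 3)); // reachable from order

      // Persisting the root transitively persists everything reachable
      // from it - no separate call is needed for the OrderLine.
      pm.makePersistent(order);

      pm.currentTransaction().commit();
      pm.close();
     }
    }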
  121. Interesting that *all* of the replies have focussed on persistence - one small part of the EJB spec.
    Well, persistence certainly isn't a small part of the EJB spec page-wise. If you check the EJB 2.1 specification you can see that entity beans alone take around 200 pages, and transaction support (which you'll agree is mostly related to persistence) takes another 40 pages. That is almost half of the whole spec (which runs to 600-some pages) if you cut out the fluff (index, FAQ, some appendices). Add to that that most people say and write "EJB" instead of "entity beans"... Now if you weight them by their usefulness, then you are quite right..
    I actually meant by functionality. We have:

    1. Session beans.
    2. Message driven beans.
    3. Timer service.
    4. Transactions.
    5. Security.
    6. Remotability.

    And finally:

    7. Persistence.

    (sorry if I missed any).

    And, of course, transactionality is of use for more than just persistence. So while I agree that a large proportion of the specification is devoted to persistence, I am simply pointing out that this is disproportionate to the fraction of the functionality it covers.
  122. Persistence isn't the only fruit[ Go to top ]

    Paul enumerates the services and bean-types which are part of EJB (within its J2EE context):
    1. Session beans.
    2. Message driven beans.
    3. Timer service.
    4. Transactions.
    5. Security.
    6. Remotability.

    And finally:

    7. Persistence.
    Paul makes a good point: J2EE/EJB has a lot to offer. As JSR-220 moves towards reducing framework dependencies through embracing dependency injection, the whole package will become more acceptable to the community.

    However, everyone knows that Entity Beans have been broken for a long time - Gavin's frustration with CMP was even cited as the origin of Hibernate. The JSR-220 folks have ripped up the Entity Bean spec as far as new developments are concerned, keeping backward support for the old style beans.

    JSR-220 would finish much more quickly and be in production much more quickly if they deprecated (i.e. made optional) the entire Entity Bean concept and adopted JDO 2.0 as the persistence chapter.

    Now, is my suggestion different from last year's calls to "make Hibernate the standard"? Yes it is very different. JDO 1.0.1 is the JCP-ratified standard for transparent persistence, and JSR-220 is taking huge liberties with their original mandate to "simplify EJB" by branching off into the definition of a brand new persistence technology which directly competes with that standard.

    I would have thought that, if EJB really believes it can compete with the "lightweight containers", that they would be thrilled to expedite the completion of their work by taking such an approach.

    We have yet to hear justifications of the choice not to align with JDO.

    Kind regards, Robin.
  123. Go ahead![ Go to top ]

    Stop your political conflicts! We have projects to develop...

    How long must we wait for the first complete implementation of EJB 3.0?

    And while waiting, what kind of persistence solution should we choose:
    JDBC (why not!), JDO (the standard), Hibernate (the challenger) or CMP 2.0 (now obsolete)?

    What is your view, Marc Fleury?

    Regards

    Dawn
  124. Relax[ Go to top ]

    After reading through the comments and the background, I must admit that I do not understand how people reach their conclusions.

    This is very unlikely to be the end of EJBs. Or of BEA. Or of Hibernate.

    I do not even consider this a big change. There is not much new functionality; it is mostly about easier ways of doing the same things.

    They are changing stuff people write today to be generated by the container. Moving more functionality into containers does not sound like the end of containers to me.

    I like the model: we write only the business logic and the container does the rest.

    It will be easier to develop J2EE applications. Which is good.

    The only losers I can see in this game are the tools that exist to do the same thing: XDoclet, the Enterprise Editions of Java IDEs. Borland may be the biggest loser.

    IBM, BEA and JBoss should not have a problem.

    And it will likely help J2EE in the struggle against Microsoft, because the typical argument against J2EE is complexity leading to high cost.

    I think we have another example of this simple model besides the ones mentioned by others: Apache Axis and .jws files (see the sketch below).
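
    With Axis, a .jws file is just a Java source file dropped into the webapp; it is compiled and exposed as a web service on first request, with no deployment descriptor. A minimal sketch (the class name is arbitrary):
    // EchoService.jws - placed in the root of an Axis 1.x webapp
    public class EchoService {
     // Every public method becomes a web service operation.
     public String echo(String message) {
      return message;
     }
    }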

    But I would also like to predict that the spec will be 50-100% more pages than the current one.

    It is just that way!

    Backwards compatibility takes its toll.

    And then there is an almost unstoppable trend to make things more complex over time.

    So I think that:
      - it will be more work to implement J2EE containers
      - there will be more stuff to learn to really understand J2EE
    but also, as stated previously:
      - there is much less stuff to learn for the developer who just needs to implement something
    and the last one is what counts most for the costs of J2EE projects.
  125. Gavin on EJB3[ Go to top ]

    Gavin just posted an interesting article summarizing his views on EJB3 and JDO.
    I've read Gavin's summary of the case against JDO. For someone who remains party to the JDO 2.0 expert group discussions, I'm impressed by the disinformation he is propagating about detachment in JDO 2.0.

    In response to the recent JDO vs. IBM, Oracle, BEA thread I had cause to evaluate Service Data Objects in relation to the detachment component of JDO 2.0. The commentary I wrote two days ago was finally published yesterday and is online at http://www.jdocentral.com/JDO_Commentary_RobinRoos_6.html.

    This commentary goes into significant conceptual (as opposed to implementation-level) detail about detachment in JDO 2.0. You'll notice that Gavin's expressed concern about the inability of JDO to differentiate between null and not-loaded references is incorrect.
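    For the curious, a minimal sketch of detachment as proposed for JDO 2.0 (Customer is a made-up persistent class, pmf a configured PersistenceManagerFactory; the method names follow the public JDO 2.0 proposals and could still change):
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;

    public class DetachmentDemo {
     // First unit of work: load a Customer and detach a copy of it.
     public static Customer loadDetached(PersistenceManagerFactory pmf, Object id) {
      PersistenceManager pm = pmf.getPersistenceManager();
      pm.currentTransaction().begin();
      Customer c = (Customer) pm.getObjectById(id, true);
      Customer detached = (Customer) pm.detachCopy(c);
      pm.currentTransaction().commit();
      pm.close();
      return detached; // usable offline, e.g. edited in a web tier
     }

     // Second unit of work: reattach the edited copy and persist changes.
     public static void saveChanges(PersistenceManagerFactory pmf, Customer detached) {
      PersistenceManager pm = pmf.getPersistenceManager();
      pm.currentTransaction().begin();
      pm.makePersistent(detached); // attaches the detached instance
      pm.currentTransaction().commit();
      pm.close();
     }
    }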

    Kind regards, Robin.
  127. error[ Go to top ]

    I stand corrected. I was not aware that this problem had been fixed. I assure you that this was not intentional "disinformation".

    Nevertheless, I am convinced that a proxy-based solution is superior to solutions based on field interception, and of the correctness of my other criticisms.
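    To make the contrast concrete, a hand-rolled illustration of the proxy approach (all names are hypothetical; real products generate such classes, whereas field interception instead rewrites the persistent class's own field accesses during bytecode enhancement):
    // An ordinary domain class...
    public class Customer {
     protected String name;
     public String getName() { return name; }
    }

    // ...and a hand-written stand-in for it. The database is hit only
    // when a method is first invoked on the reference.
    public class CustomerProxy extends Customer {
     private final Long id;
     private Customer target; // loaded lazily

     public CustomerProxy(Long id) {
      this.id = id;
     }

     public String getName() {
      if (target == null) {
       target = Database.loadCustomer(id); // hypothetical loader
      }
      return target.getName();
     }
    }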
  128. error[ Go to top ]

    I stand corrected. I was not aware that this problem had been fixed. I assure you that this was not intentional "disinformation". Nevertheless, I am convinced that a proxy-based solution is superior to solutions based on field interception, and of the correctness of my other criticisms.
    I am lost!

    Is this Bill or Gavin?
  129. Appreciate a reply![ Go to top ]

    Ok...
    Let me apply the "idiot's rule" here:
    if everyone knew that it makes sense to keep stuff simple, then why didn't they think about this in the EJB 1.0 spec?
    I found the whole concept of EJB ridiculous right from the beginning.
    I am spoilt by the neatness and simplicity of Hibernate.
  130. Rebranding EJBs....[ Go to top ]

    I think that, going ahead with EJB 3.0, there should be a rebranding of EJBs;

    maybe they should be called EJOs:

    Enterprise Java Objects.

    EJB is too controversial a term.

    cheers
  131. Rebranding EJBs....[ Go to top ]

    I think that, going ahead with EJB 3.0, there should be a rebranding of EJBs; maybe they should be called EJOs: Enterprise Java Objects. EJB is too controversial a term. cheers
    Am I right to presume that an EJO is more expensive than a POJO, and that for the extra cost an application using an EJO has the privilege of being tied into an architecture reliant upon the presence of both an application server and a relational database?
  132. new version of old story[ Go to top ]

    Home interfaces, deployment descriptors, SessionBean interfaces, welcome annotations, transparent object-relational persistence, alternative non-relational stores, fine-grained cascading options, projections, graph-oriented data models, persistence-by-reachability and deletion-by-unreachability, detachment in JDO, etc., etc., blah blah blah... and in spite of it all, nothing comes out of it.
    _____________________________________________________________

    On the other hand, in the MS world, all talk is about Avalon/XAML and Indigo distributed web services with Rich Clients.

    In the Java world, you have two choices:

    1) Implement Avalon/XAML and ship it with Linux and Mono, or
    2) Come up with your own, competitive stack.

    You will need:

    Canvas graphics, persistent objects (Tk Canvas, Gnome Canvas)
    With AA/vector-based graphics (Gnome AA Canvas)
    With animation (Nautilus Andy-branch Canvas items)
    With vector graphics (Gnome Print, librsvg)
    With a 2D graphics model (PDF, Libart, Cairo)
    With Web-like-deployment security (SecureTcl, Tcl Plugin, Java)
    _____________________________________________________________

    But if the J2EE community is going to talk about transparent persistence ad infinitum, like hens running in all directions at the same time while the rest of the world passes it by, then

    all that is going to happen is a rerun of the old story of "PC vs. Mainframes".

    Regards
    Rolf Tollerud
  133. I thought CMP2 was easy[ Go to top ]

    I'm not a rocket scientist, but I was able to sit down with the domain experts and come up with a domain model using just attributes, directional relationships and multiplicity. Just aggregation was used, with composition added later via cascade-delete. I didn't have to look anything up in JNDI; I just had to work with creates and getters and setters. The container took care of all that stuff. No JDBC connections, no JDBC, no datasource lookup. BTW, Sun was using JDO for its underlying persistence mechanism, not JDBC. EJB 2.1 will soon add web service interfaces to the services layer. Now, with EJB 3.0 changing all that completely, I wonder if the folks so vehement against EJB have implemented or worked with CMP 2.0 in any way. Is this new persistence model more about appeasing the masses, making J2EE a sort of Alice's Restaurant? I mean, it's a code rewrite to move to it. So much for sticking with the standards to insulate yourself against change.

    I would, of course, like simplified deployment descriptors. Also, being able to pass the entities themselves without the need for separate data transfer objects or value objects. It all sounds easier, but with CMR gone, one has to wonder. What happens to the EJB 2.1 web service interface in EJB 3.0? What's the best practice - EJB 2.1 or EJB 3.0? (A sketch of the CMP 2.x style I mean follows below.)
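    A minimal sketch of that CMP 2.x style (OrderBean is a made-up example; the container subclasses the abstract bean, implementing the accessors, the relationship and the persistence, driven by the deployment descriptor):
    import java.util.Collection;
    import javax.ejb.*;

    public abstract class OrderBean implements EntityBean {
     // CMP fields: the container implements the storage.
     public abstract Long getId();
     public abstract void setId(Long id);
     public abstract String getStatus();
     public abstract void setStatus(String status);

     // CMR field: a one-to-many relationship maintained by the container.
     public abstract Collection getLines();
     public abstract void setLines(Collection lines);

     public Long ejbCreate(Long id) throws CreateException {
      setId(id);
      return null; // for CMP the container supplies the key
     }
     public void ejbPostCreate(Long id) {}

     // Required lifecycle callbacks; empty for simple beans.
     public void setEntityContext(EntityContext ctx) {}
     public void unsetEntityContext() {}
     public void ejbActivate() {}
     public void ejbPassivate() {}
     public void ejbLoad() {}
     public void ejbStore() {}
     public void ejbRemove() {}
    }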
  134. I thought CMP2 was easy[ Go to top ]

    Hi Andy,

    Thanks for being willing to speak up for the CMP 2.x model. There are actually many people who like it and use it productively, but they tend to be silent in forums like this. (My sense is that a person's view of how easy the 2.x API is to use depends a lot on what tools they are using.)

    Just to make sure you know: the EJB 2.x API (including CMP) will be fully supported by EJB 3.0. In fact, many of the 3.0 enhancements will be applicable to both the 2.x and 3.0 APIs. So all the power of the new EJB QL will be added to your existing 2.x API, and you'll still have CMR there too. If you are happy now, then there's no need to change APIs and you /are/ insulated from change.

    Best
    Scott
  135. Overall the spec is great. But I cannot see why Stateful session beans could not be Web service enabled, in a similar fashion to how .NET works. Otherwise, I imagine that to expose Stateful beans as Web services you would need to rewrite them first as Stateless beans and then expose those as Web services (which in many instances is far from ideal).