Complex Object Model Development Debated at SDWest


  1. Yesterday the SDWest conference hosted a keynote panel entitled "Entity Beans do not support complex object models - what to do", which was covered in a recent InfoWorld article.

    Read Java object models debated.

    All of the panelists agreed that the initial notion of distributed persistent components was a bad idea that we should have avoided, especially after the experiments with this concept in the CORBA days.

    A lot of time was spent talking about JDO and what needs to be done. There was unanimous agreement that in order for JDO to make it, the spec needs to define the mapping layer. That is, we need a standard way to specify how objects map to tables in a database. Currently each JDO product has a proprietary way to specify the mappings.
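For readers who have not seen one, a JDO metadata descriptor illustrates the problem. The sketch below is hypothetical (the `acme` vendor name and the `key` values are made up), but the shape is typical: the standard part of the file only declares what is persistent, while the actual table and column mapping lives in vendor-keyed `<extension>` elements.

```xml
<?xml version="1.0"?>
<jdo>
  <package name="com.example.model">
    <class name="Customer">
      <!-- Standard JDO metadata stops here; the actual O/R mapping
           below is expressed through vendor extensions. -->
      <extension vendor-name="acme" key="table" value="CUSTOMER"/>
      <field name="name">
        <extension vendor-name="acme" key="column" value="CUST_NAME"/>
      </field>
    </class>
  </package>
</jdo>
```

Port the file to another implementation and the `<extension>` elements are ignored; the mapping has to be redone in that vendor's format.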

    Threaded Messages (64)

  2. Hibernate

    As one who defends EJBs (including entity beans) on this list, I have to say that I am working on a project implementing the Common Warehouse Metamodel and am using Hibernate for persistence. This is the first time that I have used it and I am quite impressed. While I was having a modicum of success with the entity bean approach on JBoss, Hibernate has greatly eased the implementation of the CWM object model. I look forward to using it more in the future!

    Cheers
    Ray
  3. I'm not convinced that JDO needs to specify the mapping semantics. In fact I think that might be a poor idea, considering the diversity of datastores that JDO is designed to support. I'm worried that some sort of over-generalized mapping specification will be produced, and that in order to use it you will wind up having to stuff it with vendor extensions anyway. Another reason not to make mapping part of the spec is that the spec risks bloating itself with XML descriptors, just like EJB. Look, learning a proprietary mapping file format is not hard, and until I see how an LDAP mapping can share the same concise, readable semantics as an O/R mapping, I'll be fine with a simple proprietary mapping.
  4. Standardize JDO Mapping

    I initially thought that it would be great if JDO standardised mapping; however, I am starting to agree with Geoff on this one. Redoing your mapping when changing vendors is not hard and does not take long. Most JDO vendors provide GUI tools to help. All you have to do is change one XML file. The most important thing is that your working application code does not need to change.

    However I would like to see managed inverse relationships (one-to-many, many-to-many) in the spec. Nearly all vendors support these already due to customer demand.
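Managed inverse relationships mean the implementation keeps both sides of a bidirectional association consistent automatically. A minimal plain-Java sketch of the behaviour (class names are illustrative; a JDO implementation would do this bookkeeping for you in enhanced or generated code):

```java
import java.util.ArrayList;
import java.util.List;

// One-to-many with a managed inverse: setting the "one" side
// updates the "many" side, so the two can never disagree.
class Department {
    final List<Employee> employees = new ArrayList<>();
}

class Employee {
    private Department department;

    void setDepartment(Department d) {
        if (department != null) {
            department.employees.remove(this); // unhook the old inverse
        }
        department = d;
        if (d != null) {
            d.employees.add(this);             // hook the new inverse
        }
    }

    Department getDepartment() { return department; }
}
```

With this in place, moving an employee to a new department can never leave a stale entry in the old department's list.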

    Cheers
    David
    JDO Genie - High Performance JDO for JDBC
  5. Standardize JDO Mapping

    As a JDO user I have not been concerned about the lack of standardised mapping. I have used the vendor mapping extensions of JDOGenie and have not felt 'tied in' to it.

    In the unlikely event I did choose to move from JDOGenie, I would need to choose a vendor that supported the extensions I require. Much more time would go into the change decision and product selection process than into changing the mapping configuration file. I wouldn't pick a replacement product that didn't have a simple-to-use mapping GUI either; sure, I can edit XML files, but why bother?

    For my project JDO is great, and I suspect that the majority of Java applications would do great with JDO too. If not, then you have plenty of choice from the likes of CocoBase and the plethora of other mapping engines/techniques.

    Dan
  6. While it's true that the EJB specification doesn't support complex Entity Beans very well, CocoBase Enterprise O/R Mapping extensions for EJB do support this very well and quite easily. Not only does CocoBase support object relationships, but it also supports dependent objects (coarse-grained Entity Beans) out of the box. This support is based on the EJB 2.0 draft specification for dependent object support, which was withdrawn in the final 2.0 spec due to some app server vendors' objections. It's a very difficult thing to implement correctly, but much easier when based on an existing and mature O/R system such as CocoBase.

    Also, if you look at the CocoBase Distributed Transparent Persistence implementation, which is based on Session Bean technology, you might wonder why JDO or Entity Beans even exist... Using this technology you can persist an existing object model and take advantage of app server transaction management and connection pooling out of the box, as well as use inheritance, 1-1, 1-m and m-m relationship management and distributed lazy relationship loading! And for those who are concerned about supposedly 'proprietary' APIs, the actual implementation of the Transparent Persistence facilities is shipped as source with the product. Developers can even repackage and rename the APIs to support their own programming conventions...

    I agree that it's important to address this at a specification level, but there are already commercial products (CocoBase) that handle these issues with ease and elegance. If customers are suffering through this, it's completely unnecessary when mature and robust tools have existed for years to address this. CocoBase has supported dependent objects with Entity Beans for almost 3 years! I'm often surprised how much of an issue is made of this when it is so easily solved by our products, as many customers have already found...

    One of our engineers wrote a great article on the topic of coarse grained entity beans in the Websphere Developer's Journal last month. The URL is
    http://www.sys-con.com/websphere/articlea.cfm?id=232

    My recommendation for anyone who needs this level of persistence is to go download the free 30-day evaluation of CocoBase, and in particular grab one of the Distributed Transparent Persistence tutorials and have a look at Chapter 6 of the Programmer's Guide on CBFacade (our Transparent Persistence APIs), in particular CBFacadeRemote. In our testing this design pattern greatly outperforms Entity Beans, and it offers a much richer and easier-to-use object programming paradigm while fully complying with and supporting the J2EE transaction and connection management facilities.

    Just my $.02

    Ward Mullins
    CTO
    THOUGHT Inc.
    http://www.thoughtinc.com
  7. Great topic. Thanks Floyd.
    Steve
  8. EJB does not support Complex Object Model ??
    ---------------------------------------------

    This is more of an architectural and performance-related issue. It is possible to model the persistence tier of an application down to any complexity; the problem lies in whether the application meets the performance requirements, which, as Marc Fleury rightly pointed out, hinges on two factors: serialization due to database connectivity, and serialization for remote access of entity beans.

    What I would like to point out is that the database connectivity problem, aka serialization, can be resolved through a robust caching mechanism that drastically cuts down on the number of database calls, which is what FrontierSuite for J2EE (our EJB CMP persistence manager) does. As a result, we have consistently maintained that we provide EJB CMP performance orders of magnitude above a normal EJB CMP. And to cover the major hole in the EJB model, we also support inheritance as an extension.
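The caching claim is easy to sketch. The toy read-through cache below is not FrontierSuite's mechanism (which is not published here); it only shows why caching cuts database calls: the loader, standing in for a database query, runs only on a miss.

```java
import java.util.HashMap;
import java.util.Map;

// A toy read-through cache: the loader (standing in for a database
// query) is consulted only when the cache misses.
class EntityCache {
    interface Loader { Object load(Object id); }

    private final Map<Object, Object> cache = new HashMap<>();
    private final Loader loader;
    int databaseCalls = 0; // instrumentation for the demo

    EntityCache(Loader loader) { this.loader = loader; }

    Object get(Object id) {
        Object hit = cache.get(id);
        if (hit == null) {
            databaseCalls++;           // this is the expensive round trip
            hit = loader.load(id);
            cache.put(id, hit);
        }
        return hit;
    }
}
```

A real persistence manager adds eviction, transactional consistency and cluster coordination, but the effect on the call count is the same.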

    The serialization for remote access can be avoided through design patterns; the session façade is the best solution for this. Through this pattern, it is possible to model the persistence tier using Entity Beans at any level of granularity, and to use Session Beans to control the granularity of the data that is passed around the network.
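The session façade idea can be shown in miniature. The names below are illustrative (in a real deployment the façade would be a stateless session bean and the domain objects would be entity beans); the point is that one coarse-grained, serializable result crosses the network instead of many fine-grained remote calls:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

// Fine-grained domain objects that stay on the server.
class OrderLine {
    String sku;
    int qty;
    OrderLine(String sku, int qty) { this.sku = sku; this.qty = qty; }
}

class Order {
    List<OrderLine> lines = new ArrayList<>();
    void add(String sku, int qty) { lines.add(new OrderLine(sku, qty)); }
}

// The coarse-grained value object that actually crosses the wire.
class OrderSummary implements Serializable {
    final int lineCount;
    OrderSummary(int lineCount) { this.lineCount = lineCount; }
}

// The facade: one remote call returns one serializable result,
// instead of the client making a remote call per entity.
class OrderFacade {
    OrderSummary summarize(Order order) {
        return new OrderSummary(order.lines.size());
    }
}
```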

    The point is that it is very much possible to model Entity Beans at any level of granularity:

    Inheritance
    Associations (1-1, 1-n, n-n), uni/bidirectional
    Aggregations (1-1, 1-n, n-n), uni/bidirectional
    Relations as objects
    Compound attributes (user-defined persistent entities, aka dependent objects in the EJB 2.0 draft specification)
    ...

    FrontierSuite for J2EE can do all this.

    More…
      
    regards

    Rajesh
    (S Rajesh Babu)
    ObjectFrontier Inc
    www.objectfrontier.com

    FrontierSuite for J2EE – Faster Development and Enhanced Performance
  9. JDO is lacking on a Mapping Specification
    -----------------------------------------

    JDO does not specify any mapping, nor should it at this point. Otherwise it will be a case of trying to fit everyone's capabilities and fitting none, and we will end up with a lot of extensions. The need here is for robust development tools that allow mapping without getting into the complexities of the DTDs etc. Realizing this, we provide a robust mapping environment where objects can be mapped to schemas. FrontierSuite for JDO provides the following approaches:

    a. Create a new schema for a model
    b. Create a new model for an existing schema
    c. Map a model to an existing schema

    All this is possible through a user-friendly environment; we have customers who have reverse-engineered close to 500 tables into an object model in a couple of hours, including the fine tuning. (And the same capabilities are available for Entity Bean mapping as well.)

    While there may be some badly designed '20-year-old schemas' around, it is possible to map almost any schema to an object model with the right set of tools, and ours are among them.

    There has been a lot of debate and acrimony around JDO, but as Craig Russell and others have said, JDO is a fine persistence standard and will be adopted very quickly once the politics are out of the way, in spite of the potshots taken by some from the sidelines. This opinion is based on the very healthy feedback from our considerable JDO customer base.

    regards

    Rajesh
    (S Rajesh Babu)
    ObjectFrontier Inc
    www.objectfrontier.com

    FrontierSuite for JDO - Enterprise Solution for JDO Applications
  10. With enough hard work and complexity it is possible to work around the problems with any technology. The advantage of JDO is its specific focus. It is designed for persisting fine grained object graphs transparently and efficiently. It does not try to do anything else. This makes JDO solutions much simpler and leads to much improved productivity and performance.

    Cheers
    David
    JDO Genie - High Performance JDO for JDBC
  11. Concur with David. There has been too much FUD about JDO from parties interested in sabotaging the spec, and the noise caused by this may be one of the reasons why people are still not aware of the utility of the standard. It is time this FUD was countered.

    The bottom line is JDO is good and it solves a lot of the persistence problems. And where it misses out, extensions are available (we too are an O/R Mapping product company that has been around for years with a wide customer base) and these extensions have a very good chance of making it back to the specification.

    regards

    Rajesh
    (S Rajesh Babu)
    ObjectFrontier Inc
    www.objectfrontier.com

    FrontierSuite for JDO - Enterprise Solution for JDO Applications
  12. So lemme get this straight... The current spec is good because it keeps your company (and others) in the business of providing alternatives to it? Don't get me wrong, but that sounds like one big conflict of interest to me, nothing more. How about, "if the spec and implementation actually were that good, we wouldn't need your product to start with." This is not flame bait, it's just the yardstick for determining when a standard actually works.

    When you end up using more work-arounds and/or proprietary extensions than the standard mechanisms themselves, in my book it's a sign that the standard could use some more work. It's that simple.

    As an unrelated example, take Swing. It's an API which actually works as it is. You don't need to pay for third-party tools to make your Swing applications work, you don't need proprietary extensions, etc. It just works as intended. If you also understand how it works (as opposed to hiring the first guy off the street as a Swing programmer), contrary to popular belief, it doesn't have major performance issues either. You can write very large and complex applications with Swing, and we actually did just that at one point.

    Well, _that_ is when a standard is complete for me. When it just does its job as it is, and there is no need to buy third party extensions for it. That's all I'm trying to say.

    Call it FUD if it makes you feel any better.
  13. Swing to JDO - how?

    Swing is a standard. But it is a standard in which there is only one implementation.

    This cannot be directly compared to JDO which is a standard to which vendors create compliant implementations.

    JDO currently standardizes how persistent objects behave and how they can be managed by applications that use them (primarily through the PersistenceManager and JDOHelper interfaces).

    For now we have not standardized the underlying mapping to a relational schema. To do so would have been to constrain the creativity of the vendors in addressing the inherent complexity. But no matter what (compliant) product you use, or how you represent the mapping to a Relational schema, the persistent objects still behave in an entirely predictable and standard way. That is what the spec is there for. Applications written against JDO are portable across different (compliant) implementations.

    It is likely that work will be undertaken in order to increase the commonality between persistence mappings of the various vendors. This would make migration easier. It's too early to claim that the mapping would actually be standardized. Lots of clients ask for it to be done, but there is an argument that the constraint of flexibility would not justify the portability of XML files.

    Kind regards, Robin.
  14. <Cristian Golumbovici>
    So lemme get this straight... The current spec is good because it keeps your company (and others) in the business of providing alternatives to it? Don't get me wrong, but that sounds like one big conflict of interests to me, nothing more. How about, "if the spec and implementation actually were that good, we wouldn't need your product to start with." This is not flame bait, it's just the yardstick for determining when a standard actually works.
    <Cristian Golumbovici>

    The spec is good: it provides a standard way of persisting a Java object model. And it does provide us the opportunity to offer a good implementation that you can choose and work with (the best-of-breed solution approach). The reason why we do not have a single implementation from Sun (as with Swing) is that there was no attempt to do this; the reference implementation is meant to provide only a feel for the spec. And though there is only one implementation of Swing, we do find other implementations of small components from some vendors and open source as well.
    So judging the efficacy of a spec based on whether there is just one implementation is off the mark, I would say. Using the same yardstick, in your opinion, J2EE is a failure; after all, there are more than 100 J2EE app servers around, all implementing the same spec.

    <Cristian Golumbovici>
    Call it FUD if it makes you feel any better.
    <Cristian Golumbovici>

    What I meant by FUD was the negative and misplaced criticism by some people of the way the JDO standard was drafted. Some of the major objections were about bytecode enhancement (which is not mandatory, by the way; even source code enhancement is possible), the lack of mapping guidelines, and so on.

    <Cristian Golumbovici>
    This is not flame bait
    <Cristian Golumbovici>

    No probs at all.

    regards
    Rajesh
    (S Rajesh Babu)
    ObjectFrontier Inc
    www.ObjectFrontier.com

    FrontierSuite for JDO - Enterprise Solution for JDO Applications
  15. EJB Inheritance

    In many ways I actually prefer EJB to JDO, principally because of the fact that finders are explicit, as are creations etc, which can help to drive the physical design of the database (i.e. you can see there is a finder which works based on a particular field. It's not hard to see that indexing that field would increase the finder method's performance). It also has some advantages when talking to legacy backends etc, as it's not coupled to JDBC (although I can see that potentially the same could apply to JDO).

    The main problem I have with EJB is that at present (without extensions) it's impossible to use inheritance between entities. I assume the reason for this is that EJBs were originally conceived as a layer on top of a database, rather than as a true OO development environment.

    I wholly support Rajesh's view that entities should support inheritance, as otherwise people just invent complex workarounds to make the same thing happen. Here comes my real question: does anyone know if there is an intention to make inheritance amongst Entity Beans part of the 'real' EJB spec?
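For readers unfamiliar with the workarounds being alluded to, the most common one is to flatten the hierarchy into a single table with a discriminator column and rebuild the subtype by hand on load. A hypothetical hand-rolled sketch of that mapping logic (no EJB machinery shown; class and column names are made up):

```java
// Single-table ("table per hierarchy") workaround: a TYPE column in
// the flattened ACCOUNT table records which subclass each row is,
// and the mapper reconstructs the subtype by hand on load.
abstract class Account { String id; }

class SavingsAccount extends Account { double rate; }

class CheckingAccount extends Account { double overdraftLimit; }

class AccountMapper {
    // In real code the arguments would come from a JDBC ResultSet row.
    Account fromRow(String type, String id, double numericCol) {
        Account account;
        if ("SAV".equals(type)) {
            SavingsAccount s = new SavingsAccount();
            s.rate = numericCol;
            account = s;
        } else if ("CHK".equals(type)) {
            CheckingAccount c = new CheckingAccount();
            c.overdraftLimit = numericCol;
            account = c;
        } else {
            throw new IllegalArgumentException("unknown type: " + type);
        }
        account.id = id;
        return account;
    }
}
```

Products with inheritance extensions generate exactly this kind of plumbing, which is why hand-rolling it feels like busywork.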


    Paul
  16. EJB Inheritance

    I agree about the need for inheritance in the EJB spec. As I mentioned earlier, I am working on an implementation of the Common Warehouse Metamodel. The CWM makes extensive use of inheritance and while I was able to employ various well-known work-arounds it was starting to become quite cumbersome to work with. Hibernate has solved the problem quite nicely, and it integrates well with JBoss, which is another plus for me.


    Cheers
    Ray
  17. S Rajesh Babu... the name sounds familiar to me. Were you at Wipro, Bangalore during 1998-2000?
  18. <Quote>
    S.Rajesh Babu ..the name sounds familiar to me. Were you at wipro, bangalore during 1998-2000 ? </Quote>.

    No. Maybe that was someone else with the same name.

    regards

    Rajesh
    (S Rajesh Babu)
    ObjectFrontier Inc
    www.ObjectFrontier.com

    FrontierSuite for JDO - Enterprise Solution for JDO Applications
  19. People don't get OO

    What is it that people don't get about OO?
  20. Collaboration Patterns

    I read the article and agree with Kyle Browne's comments:

    "..he has found that people really have not done object-oriented analysis or object-oriented design."

    In my opinion designers have traditionally found themselves constrained by the relational schema they will use, and by the general complexities of object persistence (impedance mismatch, etc.). The result is that they avoid complex structural relationships in their designs (inheritance/implementation hierarchies) and build systems based on value objects.

    Additionally, the concept of a "service-based architecture" has been misused as an excuse for separating behaviour into objects in the "service tier" from the corresponding state in the "data tier".

    The fundamental tenet of OO is that objects have state and behaviour, but this seems to have been forgotten in the rush to implement the latest J2EE pattern... The result is applications that lack a true domain object model, or have one that has been so castrated as to be ineffective. Reuse is compromised, as applications tend towards an effectively "procedural" architecture.

    Here at Ogilvie Partners we design domain models based on Collaboration Patterns. These patterns use the fact that the relationship between two collaborating objects dictates a huge amount of the structure of the classes on either side of the collaboration.

    I describe the classes and identify the collaboration patterns between them in an XML document. This is parsed by a tool that understands collaboration pattern implementations, and generates a comprehensive and extensible domain object model in source code form, comprised of the appropriate Class and Interface definitions.

    Using this technique I have created models for non-trivial domains in a day. Naturally the model evolves over time, but this is easily accommodated.

    Coincidentally the generation process also outputs a JDO persistence descriptor for the package, and I use JDO to persist these comprehensive OO models. The underlying concept is not tied to JDO persistence, but right now JDO represents the Standard (capital S) way to persist such models.

    I'm currently refining this technology prior to a Beta release - anyone care to try it out?

    Kind regards, Robin.
    Robin@OgilviePartners.com
  21. For complex/large apps, O/R does not fit.
    (O/R: OMG, JDO, EJB, Castor, Hibernate, OJB, etc.)
    Start reading here:
    http://www.agiledata.org/essays/impedanceMismatch.html
    In practice, on a complex form, it ends up looking relational.
    Ex: a tax form has a list of income and a list of expenses. OMG has done a huge disservice by "advertising" that one should not need to know SQL.

    A better fit is a SQL data layer.
    (Ex: JDBC ResultSet, JDBC RowSet, Jakarta Commons-SQL, iBatis.com DB layer.)
    It's no secret that the majority of large apps use handcrafted JDBC for complex joins, performance and simplicity. Tried and true. Look at ADO.NET.
    It works great with the nested beans concept, ex:
    http://jakarta.apache.org/struts/struts-nested.html
    You have 3 beans:
    One is a list (rows) of income.
    One is a list (rows) of expenses.
    And a third form bean that contains the above 2 beans as properties.
    On retrieve: { bean1.populate(); bean2.populate(); }
    On save: { bean1.save(); bean2.save(); }
    This does not mean you don't do OO!
    OO is for productivity via reuse, not OO for OO's sake. You can create a base DAO helper for your form beans, etc.
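The three-bean layout above can be sketched in plain Java (class and method names are my own guesses; the populate/save bodies would hold the hand-crafted JDBC):

```java
import java.util.ArrayList;
import java.util.List;

// Row-oriented list beans; populate()/save() would hold the
// hand-crafted JDBC in a real app (stubbed out here).
class IncomeListBean {
    List<Object[]> rows = new ArrayList<>();
    void populate() { /* SELECT ... would fill rows */ }
    void save()     { /* INSERT/UPDATE ... would flush rows */ }
}

class ExpenseListBean {
    List<Object[]> rows = new ArrayList<>();
    void populate() { /* SELECT ... */ }
    void save()     { /* INSERT/UPDATE ... */ }
}

// The form bean composes the two list beans as properties and
// delegates retrieve/save to them.
class TaxFormBean {
    IncomeListBean income = new IncomeListBean();
    ExpenseListBean expense = new ExpenseListBean();

    void retrieve() { income.populate(); expense.populate(); }
    void save()     { income.save(); expense.save(); }
}
```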

    O/R works in development (or in small apps that can fit on a laptop), but not in complex/large production. In any case, if you are writing a DB app, someone on the team should know SQL. Even before Java and the Web, DBAs used to say: hey, let me take a look at your stored procedure to make sure you are not taking minutes for something that should take split seconds under load. (Else... they would bar you from SQL and force you to do stored procs, not because they are bad people, but to protect the system.)

    A DB layer designed by people who do not know SQL is just not useful. There are some great long-standing designs out there by Joe Celko. (Which O/R tool addresses the things Joe Celko talks about?)
    The slowest part of J2EE is data access.

    If you do use O/R, consider putting it behind a DAO interface, so you can replace your O/R with something else.
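The DAO-interface advice is worth making concrete. In this minimal sketch (names hypothetical), application code depends only on the interface, so an iBATIS-backed or O/R-backed implementation can be swapped in without touching callers:

```java
import java.util.HashMap;
import java.util.Map;

// Callers program against this interface only; swapping the
// persistence technology means writing a new implementation,
// not changing callers.
interface CustomerDao {
    void save(String id, String name);
    String findName(String id);
}

// One interchangeable implementation (in-memory, for illustration);
// an iBATIS- or Hibernate-backed version would implement the same
// interface.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<String, String> store = new HashMap<>();
    public void save(String id, String name) { store.put(id, name); }
    public String findName(String id) { return store.get(id); }
}
```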

    Truth is, Java people need to learn the SQL execution path well. This is the same as saying: hey, don't use FrontPage to write your HTML, just hand-craft the JSP so it's fast, lean, clean. It is silly to complain about how "bloated" someone's HTML is and... not know SQL and how it joins.

    Hth,
    .V
    Of course I have a sample app that uses iBATIS for DAO (via an interface, so you can change it) at basicPortal.sourceforge.net
  22. For complex/large apps, O/R does not fit.

    > (O/R: OMG, JDO, EJB, Castor, Hibernate, OJB, etc).
    > Start reading here:
    > http://www.agiledata.org/essays/impedanceMismatch.html
    > In practice, on a complex form, it ends up looking relational.

    I don't really understand what forms have to do with persistence, or what "looking relational" actually means.

    > Ex: A tax form, has a list of income and a list of expenses. OMG has done a huge disservice by "advertising" that one should not need to know SQL.

    Errrr... I must say that after the first sentence, you lost me.

    > A better fit is a SQL data layer.
    > (Ex: JDBC ResultSet, JDBC RowSet, Jakarta Commons-SQL, iBatis.com DB layer).

    Every O/R mapper is also an SQL data layer, isn't it? It's just that I don't have to write it from scratch. Hand-crafted SQL is great for performance tweaks, but that doesn't mean I have to (or want to) write all of the CRUD myself.
  23. Dejan,

    Even with a SQL-based layer (like iBATIS) the base DAO can do CRUD for you, so you do not have to. It's not either/or. One of my points is that you can't just say "I do not know SQL, it must not be important" and be a hammer looking for a nail.

    I hopefully write better Java code than these arguments, so take a peek at my code (I have a link in my other post) and comment, or... show me a good example of DAO code you wrote.


    .V
  24. But something like Hibernate buys you CRUD that does not look like CRUD (manipulate your objects and let the O/R mapper worry about persisting them). It allows you to navigate child and dependent objects and inheritance hierarchies elegantly (with lazy loading of data), take part in XA transactions, get built-in caching, batched updates, automatic updates of only dirty data, transparency across multiple databases, etc.

    Yes, I could write all this myself around JDBC, but why bother if at the end of it I still have to use SQL all over the place? It gets me no performance gain (in fact in most cases it's slower), it's uglier and it's harder to maintain. Knowing SQL is vital, of course - you cannot just develop an OM in a vacuum and disregard DB design completely. Ideally your people should have the skills to do both. But your OM should not just be a Java mirror of your DB; that's plain ugly and pointless in all but the simplest of applications.
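Of the features listed above, "update only dirty data" is perhaps the least obvious, so here it is in miniature. This is a hand-rolled illustration, not Hibernate's actual mechanism (which detects changes by comparing object state against a snapshot at flush time):

```java
// Minimal dirty tracking: the object remembers whether any setter
// actually changed state since the last flush, so unchanged objects
// generate no UPDATE statement at commit.
class TrackedCustomer {
    private String name;
    private boolean dirty;

    String getName() { return name; }

    void setName(String name) {
        // Only mark dirty on a genuine change of value.
        if (name == null ? this.name != null : !name.equals(this.name)) {
            this.name = name;
            dirty = true;
        }
    }

    // Called by the persistence layer at commit time.
    boolean needsUpdate() { return dirty; }
    void markFlushed() { dirty = false; }
}
```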
  25. Tom,

    With respect, I disagree :-)

    .V
  26. Professional Disagree-er?

    <Vic>
    Tom,

    With respect, I disagree :-)

    .V
    <Vic>

    Vic,

    I am not sure whether you mean disagreeing is just something that you do as a hobby, or whether you are disagreeing that Tom's name is in fact Tom, or whether you just disagree with every single word that Tom wrote in his post...

    If it is the last option then perhaps you could let us know your rationale for disagreeing with the parts of Tom's post that seemed to me like facts: "Hibernate ... allows you to navigate child and dependent objects and inheritance hierarchies elegantly (and with lazy-loading of data), take part in XA transactions, built-in caching, batched updates, automatically update only dirty data, transparency across multiple databases" - is this false?

    Also I was surprised to hear you disagree with this: “Knowing SQL is vital, of course - you cannot just develop an OM in a vacuum and disregard DB design completely. Ideally your people should have the skills to do both.”

    Apologies for the flippancy, but it annoys me; when someone takes the time to post a well-reasoned argument, I think it deserves a respectful and reasonable response...

    Regards,
    Lawrie.
  27. Hi,

    <quote by='Tom'>
    Raw sql querying performance is rarely the prime goal of a J2EE app - for anything even remotely transactional using an O/R tool shaves weeks and months off development time, and ends up with far better performance, stability and maintainability.
    </quote>

    This is a very dramatic statement.

    "Weeks and months"?!?
    Where do these values come from? I'm not an O/R mapping expert and my only experience with such things is with Hibernate (one of the best, I'm told), and in my experience the time saved is nil. From my point of view it just seems to replace one set of boring tasks (writing the SQL) with another (writing XML and metadata). The most tedious part of using SQL is the JDBC code, but that could be eliminated using a JDBC framework, like the Spring Framework, or something like the approach in this JavaWorld article.
    Can you share with the rest of us what tools you used, and what your methodology was?
    I would be very interested in knowing how I could save "weeks and months off development time".

    "far better performance"... it seems to me that, regardless of the O/R tool used, they all end up executing SQL queries via JDBC, so I find this statement a bit weird.

    "stability" .... I don't understand what you mean by this, could you please explain?

    "maintainability" ... I also don't understand this. Do you mean, for example, that the steps required to adapt the code to the addition of a column to a table are somewhat less painful using a O/R tool than using SQL? If this is what you mean, I disagree.


    <quote by='Tom'>
    [...snip lots of useful stuff ...]
    transparancy across multiple databases
    [ ... ]
    </quote>

    Portability across multiple databases seems to be very important to O/R tool advocates/users (regardless of how infrequently such a migration happens), and since you seem to be one, I will take this opportunity to ask you one thing.
    How come the issue of portability among O/R tools almost never comes up? Is that not important as well? Does the JDO standard help somehow with this issue? Is it possible to just plug and unplug different O/R tools?

    <quote by='Tom'>
    Yes, I could write all this myself around JDBC, but why bother if at the end of it I still have to use SQL all over the place?
    </quote>
    No.
    You can externalise your SQL in properties files or XML files, and you still end up with "pretty to look at" code.
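Externalising SQL as suggested takes only a few lines with the standard `java.util.Properties` class. The key names below are made up; the text is inlined for the demo, where normally you would load a properties file off the classpath:

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Named SQL statements live in a properties source instead of being
// embedded in Java code; callers look statements up by name.
class SqlCatalog {
    private final Properties sql = new Properties();

    SqlCatalog(String propertiesText) {
        try {
            sql.load(new StringReader(propertiesText));
        } catch (IOException e) {
            throw new IllegalStateException(e); // cannot happen for a StringReader
        }
    }

    String get(String name) {
        String statement = sql.getProperty(name);
        if (statement == null) {
            throw new IllegalArgumentException("no SQL named " + name);
        }
        return statement;
    }
}
```

Usage: `new SqlCatalog("customer.byId=SELECT name FROM customer WHERE id=?").get("customer.byId")` returns the statement text, which then goes to a `PreparedStatement` as usual.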

    Best regards,
    Luis Neves
  28. Luis,

    <Luis>
    I'm not O/R mapping expert and my only experience with such things is with Hibernate (one of the best I'm told), and from my experience the time saved is null. From my point of view it just seems to replace boring tasks (writing the SQl) with another set of boring tasks (writing xml and metadata).
    <Luis>

    One advantage that springs to mind is that an O/R tool has the potential to check that your Java member variable to Database field mapping is syntactically correct (i.e. compatible data types, variable exists, database field exists, etc). SQL errors will only show up at run-time.

    <Luis>
    The most tedious part of using SQL is the JDBC code, but that could be eliminated using a JDBC framework, like Spring Framework
    <Luis>

    I have not investigated the Spring Framework yet (is there any documentation for it available?) but I am wondering why it is not considered an O/R mapping tool. If it allows you to map your object attributes to database fields without writing any SQL then it must have some element of "mapping" to it... If it can only map an object to one table then I would say it would have been of limited use in any of my recent projects. However, if it can map objects to multiple tables then surely it qualifies to some degree as an O/R tool?

    <Luis>
    "far better performance"... it seems to me that, regardless of the O/R tool used, they all end up executing SQL queries via JDBC, so I find this statement a bit weird.
    <Luis>

    Actually O/R tools need not necessarily use JDBC - they could potentially use the native database API to attain greater performance (like some app servers do).

    <Luis>
    "maintainability" ... I also don't understand this. Do you mean, for example, that the steps required to adapt the code to the addition of a column to a table are somewhat less painful using a O/R tool than using SQL? If this is what you mean, I disagree.
    <Luis>

    This is your personal opinion - but I would much rather have an O/R tool that just required a couple of mouse-clicks to map a member variable to a database field than have to amend SQL for create, read, and update statements. Like you say, something like the Spring Framework could offer this type of functionality as well, but as I have already stated, perhaps it would then start being an O/R tool itself.

    And as Tom also pointed out, "Hibernate ... allows you to navigate child and dependent objects and inheritance hierarchies elegantly (and with lazy-loading of data), take part in XA transactions, built-in caching, batched updates, automatically update only dirty data."

    If the Spring Framework were to incorporate all this functionality then I would certainly investigate it!

    Regards,
    Lawrie
  29. <quote by='Lawrie'>
    I have not investigated the Spring Framework yet (is there any documentation for it available?)
    </quote>

    There is a book.
    The Spring Framework is analysed in Rod Johnson's book Expert One-on-One J2EE Design and Development.


    <quote by='Lawrie'>
    I am wondering why it is not considered an O/R mapping tool.
    </quote>

    Because it isn't.
    The Spring Framework is a collection of useful code published in Rod's book, which includes, among other things, a JDBC abstraction layer.

    <quote by='Lawrie'>
    [...] without writing any SQL then it must have some element of "mapping" to it...
    </quote>

    You still have to write the SQL.

    <quote by='Lawrie'>
    I would much rather have an O/R tool that just required a couple of mouse-clicks to map a member variable to a database field than have to amend SQL for create, read, and update statements.
    </quote>

    Well... if the only thing required is a couple of mouse clicks, then I guess you win :-)
    But, at least for me, amending the SQL is not a big deal.
     
    <quote by='Lawrie'>
    And as Tom also pointed out "Hibernate ... allows you to navigate child and dependent objects and inheritance hierarchies elegantly (and with lazy-loading of data), take part in XA transactions, built in caching, batched updates, automatically update only dirty data."
    </quote>

    All of that can also be done using JDBC/SQL (I'm not sure about the "dirty data" thing, what do you mean by "dirty data"?), but I see the value of having a nicely packaged framework that already does the plumbing work.

    Best regards,
    Luis Neves
    <quote by='Luis Neves'>
    I worry that the way chosen to shield me is to reduce all databases to a least common denominator.
    You spoke above of MySQL and Oracle; the feature set of MySQL is a minimal subset of the feature set of Oracle, and after I paid a pile of money for Oracle I would be a little pissed off if I couldn't use the features for which I paid.
    </quote>

    Good point about O/R tools being limited to the SQL features common across databases. For those who want to leverage the full feature set of a relational database by using SQL directly from the application, and still want to improve the portability of the application across databases, take a look at Vembu Technologies (www.vembu.com). They have a product called SwisSQL which tries to convert SQL from one database dialect to another through a multi-dialect SQL conversion engine. It is still in 1.0 Alpha, so they have a long way to go.

    I believe that is a good solution for SQL portability. I have only tried their GUI Console and not their API.

    regards,
    Hyther
  31. Luis,

    > All of that can also be done using JDBC/SQL (I'm not sure about the "dirty data" thing, what do you mean by "dirty data"?)

    For the sake of argument (and flying in the face of originality), let's say I have a persistent object, FooBar, with two properties, foo and bar (and some id field). If I load an instance of that bean from the persistent data store, let's say foo = 'foo' and bar = 'bar'. If I set foo = 'new', that value is dirty, and my object is also dirty (i.e. out of sync with the database). When I then persist it, tools like Hibernate are smart enough to recognise that bar has not changed, and omit it from the update.

    eg. UPDATE foo_bar SET foo = 'new' WHERE id = [whatever].

    Try writing hand-coded SQL statements to do that. Your simple JDBC wrappers and suchlike ultimately work on SQL strings you construct yourself. If you want to get behaviour like this you will need:
    A clean copy of your object.
    A 'dirty' copy, modified by the app.
    A great big if...else block comparing the two and building an SQL string which you then fire at the database. Or iterating over the entries in your map, if that's what your object is.

    Even if you neatly store all of your SQL in XML files, you can wave goodbye to all that. Anything that requires you to provide your own SQL will also require you to reinvent this particular wheel if you expect to optimise your queries like this. And this is just the tip of the whole wheel-reinvention iceberg :)
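
    To make that concrete, here is a minimal, self-contained sketch of the comparison just described. The class and method names are illustrative (not from any framework), and a real tool would of course use PreparedStatement parameters instead of concatenating quoted values:

```java
import java.util.Iterator;
import java.util.Map;

public class DirtyUpdateBuilder {

    // Compare the clean (as-loaded) copy of an object against the dirty
    // (modified) copy and build an UPDATE containing only changed columns.
    // Both copies are represented as column -> value maps for simplicity.
    public static String buildUpdate(String table, String idColumn, Object id,
                                     Map clean, Map dirty) {
        StringBuffer set = new StringBuffer();
        for (Iterator it = dirty.keySet().iterator(); it.hasNext();) {
            String column = (String) it.next();
            Object newValue = dirty.get(column);
            Object oldValue = clean.get(column);
            boolean changed = (newValue == null)
                    ? oldValue != null
                    : !newValue.equals(oldValue);
            if (changed) {
                if (set.length() > 0) set.append(", ");
                set.append(column).append(" = '").append(newValue).append("'");
            }
        }
        if (set.length() == 0) return null; // nothing dirty: skip the UPDATE
        return "UPDATE " + table + " SET " + set
                + " WHERE " + idColumn + " = " + id;
    }
}
```

    For the FooBar example above, with only foo changed, this yields UPDATE foo_bar SET foo = 'new' WHERE id = 7 (for id 7). The point is that even this toy version is yet another wheel you have to build, test, and maintain yourself.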

    Now personally, I use both - for really complex and performance-critical queries, you simply have to write that SQL yourself (particularly if you want some DB-specific syntax, optimizer hints, etc.). In that case, firing named queries stored in an SQL config file is ideal, as it is the correct tool for the job. But all this CRUD work? Why would I?
  32. < ... dirty data explanation ... >

    Thanks for the detailed explanation Tom, much appreciated.

    Best regards,
    Luis Neves
  33. Lawrie,

    > I have not investigated the Spring Framework yet (is there any documentation for it available?) but I am wondering why it is not considered an O/R mapping tool. If it allows you to map your object attributes to database fields without writing any SQL then it must have some element of "mapping" to it... If it can only map an object to one table then I would say it would have been of limited use in any of my recent projects. However, if it can map objects to multiple tables then surely it qualifies to some degree as an O/R tool?
    >

    Just wanted to add to what Luis already said. The biggest advantage of the Spring Framework's JDBC abstraction layer is that it handles all the nasty exception handling, as well as opening and ALWAYS closing the connection for you. You are not forced to write all those try/catch/finally blocks, which seem to make up two-thirds of typical JDBC code.

    As an example, if you have a query that you want a resultset for, you extend a query class and provide a constructor where you specify your SQL query, plus an extract() callback method where you handle each row returned in the resultset. That's all you need.

    public class SpringEmployeeDTOQuery extends
                 com.interface21.jdbc.object.ManualExtractionSqlQuery {

       public SpringEmployeeDTOQuery(DataSource ds) {
          super(ds, "select EMPNO, ENAME, HIREDATE from EMP where EMPNO = ?");
          declareParameter(
             new com.interface21.jdbc.core.SqlParameter(java.sql.Types.NUMERIC));
          compile();
       }

       protected Object extract(ResultSet rs, int rownum) throws SQLException {
          // This is where we populate the DTO object
          EmployeeDTO emp = new EmployeeDTO();
          emp.setEmployeeId(rs.getInt("EMPNO"));
          emp.setName(rs.getString("ENAME"));
          emp.setHireDate(rs.getDate("HIREDATE"));
          return emp;
       }
    }

    Then I could use this query object in an Object Assembler or DAO called from a servlet or a session bean:

    public class SpringObjectAssembler {

       public EmployeeDTO getEmployee(int employeeId) {
          DataSource ds = DataSourceUtils.getDataSourceFromJNDI("mydb");
          SpringEmployeeDTOQuery query = new SpringEmployeeDTOQuery(ds);
          List empList = query.execute(employeeId);
          if (!empList.isEmpty())
             return (EmployeeDTO) empList.get(0);
          else
             return null;
       }
    }

    Any SQLException that is thrown is wrapped in a runtime exception - so there is no need to handle them unless you have a good reason to do so.

    So, as you can see you have to do the O/R mapping yourself, but you cut out the tedious JDBC plumbing code.

    I think there is a place for both O/R mapping tools and JDBC frameworks - the trick is to choose the right tool for the task. I'm pretty impressed with what I have seen from Hibernate so far and I hope I will be able to use it in a project soon.

    --Tom Risberg
  34. Spring, DAO, etc[ Go to top ]

    <Thomas Risberg>
    public class SpringEmployeeDTOQuery extends com.interface21.jdbc.object.ManualExtractionSqlQuery {
    //blabla, other DAO stuff
    </Thomas Risberg>

    Hehe! You know what, I'm sure all of us have created such a DAO framework in the past. I have created something very similar to, and as flexible as, Spring, but now I'm switching to Hibernate. Why? Because Hibernate simply is much more flexible and dynamic. It wasn't for time saving at all, because with a DAO framework like this you can indeed do the database parts as fast as with an O/R mapping framework. A DAO framework is rather static. Hibernate can do lots of sophisticated things almost out of the box for me: lazy loading of objects, automatic outer-join-based fetching, and so on. In fact I believe one shouldn't use O/R frameworks for simple applications but only for big ones! Big apps have more sophisticated data-fetching requirements and different data access paths, and you really don't want to end up writing yet another SQL statement and another if/else in the DAO to handle each new path; you want to have a single domain object model and work with that, but still have good performance. That's what an O/R tool should do.

    Ara.
  35. Spring, DAO, etc[ Go to top ]

    Hi,

    <quote by='Ara'>
    Because Hibernate simply is much more flexible and dynamic.
    </quote>

    What does this mean? In what way is Hibernate "more flexible and dynamic"?

    <quote by='Ara'>
    It wasn't for time saving at all, because with a DAO framework like this you can indeed do the database parts as fast as an OR mapping framework.
    </quote>

    That matches my experience also.

    <quote by='Ara'>
    A DAO framework is rather static.
    </quote>

    A DAO framework allows you to switch persistence mechanisms at will, in what way is this static?

    <quote by='Ara'>
    In fact I believe one shouldn't use O/R frameworks for simple applications but only for big ones! Big apps have more sophisticated data-fetching requirements, different data access paths, but you really don't want to end up writing yet another SQL and another if/else in the DAO to handle each new path, you want to have a single domain object model and work with that but still have good performance.
    </quote>

    What are "data access paths"? Are you talking about different applications querying the data? Or are you talking about query criteria (the "WHERE" clause in SQL)? Something else?

    <quote by='Ara'>
    That's what an OR tool should do.
    </quote>

    Perhaps ... You talk above about "Big apps"; I have done some of those, and I couldn't conceive of doing them without making use of some advanced database features.
    Last time I used Hibernate, the use of stored procedures wasn't supported, and judging by the mailing lists, "stored procedures" is a dirty word... this doesn't look very "flexible and dynamic" to me.

    I'm not sure what an OR tool should do, but I think it should allow me to work with the database the way I want to.

    Best regards,
    Luis Neves
  36. Employee as a JDO Example[ Go to top ]

    Thomas Risberg gave us an example of Employee earlier, involving classes SpringObjectAssembler and SpringEmployeeDTOQuery.

    I thought you might like to see the equivalent using JDO. Here it is:

    package domain;
    import java.util.Date;

    public class Employee {

        private String employeeId;
        private String name;
        private Date hireDate;

        public void setEmployeeId(String id) { employeeId = id; }
        public void setName(String n) { name = n; }
        public void setHireDate(Date d) {hireDate = d; }

        public String getEmployeeId() { return employeeId; }
        public String getName() { return name; }
        public Date getHireDate() { return hireDate; }
    }

    Of course there's nothing JDO-specific in that class, since JDO is effectively transparent. Oh, you will of course need a persistence descriptor:

    <jdo>
        <package name="domain">
            <class name="Employee"/>
        </package>
    </jdo>

    That should do it. Your application can now persist instances of Employee against any data store for which a JDO implementation exists - could be relational or object db.

    Mapping to an existing schema? Add a few <extension> tags to the persistence descriptor identifying table name and field name. The notation of that mapping is vendor-dependent.

    By the way, the example that Thomas posted was deliberately simple, and this is merely an equivalent. Don't presume that JDO is only applicable for such "simple" models where the domain is just a value object with simple properties. We use JDO to persist highly complex models with collaborative relationships and inheritance/implementation hierarchies where these represent good design choices.

    Kind regards, Robin.
    Robin@OgilviePartners.com
  37. Employee as a JDO Example[ Go to top ]

    Hi Robin

    "Mapping to an existing schema? Add a few <extension> tags to the persistence descriptor identifying table name and field name. The notation of that mapping is vendor-dependent."

    Here is the JDO meta data with JDO Genie extensions to map to the original schema. I did this in less than 1 minute with our GUI mapping tool.

    <class name="Employee">
        <extension vendor-name="jdogenie" key="jdbc-table-name" value="EMP" />
        <field name="employeeId">
            <extension vendor-name="jdogenie" key="jdbc-column">
                <extension vendor-name="jdogenie" key="jdbc-column-name" value="EMPNO" />
            </extension>
        </field>
        <field name="hireDate">
            <extension vendor-name="jdogenie" key="jdbc-column">
                <extension vendor-name="jdogenie" key="jdbc-column-name" value="ENAME" />
            </extension>
        </field>
        <field name="name">
            <extension vendor-name="jdogenie" key="jdbc-column">
                <extension vendor-name="jdogenie" key="jdbc-column-name" value="HIREDATE" />
            </extension>
        </field>
    </class>

    Cheers
    David
    JDO Genie - High Performance JDO for JDBC
  38. Employee as a JDO Example[ Go to top ]

    Good example code, I like that. Not theory discussion again.

    Here is a non O/R (SQL based DAO layer) example:
    http://www.ibatis.com/common/example.html for comparison.

    I find that the SQL-based DAO layer is less code and simpler (and since it's simpler, it's faster). I like to see the SQL that will execute.

    If people are using O/R (JDO, etc.) they should have a DAO interface so that they can switch in case of a problem; beans then talk to JDO via the DAO interface.

    Castor and EJB were popular last year for O/R and.... have been disproved in production. There is a chance that JDO, Hibernate and OJB will be disproved in production for complex apps as well.

    And for the SQL people who used to do raw JDBC, we now have iBatis DAO and Jakarta's Commons-SQL. (iBatis is free/open source, also does caching, gives DB independence, etc.)

    I hope there are more arguments with code to back them up, as opposed to "my vendor promised 30% savings".

    my 2c, refundable
    .V

    ps. I have my own examples, at basicPortal.sf.net.
  39. Employee as a JDO Example[ Go to top ]

    > Good example code, I like that. Not theory discussion again.

    >
    > Here is a non O/R (SQL based DAO layer) example:
    > http://www.ibatis.com/common/example.html for comparison.

    A simple select example. I use this kind of thing a lot (although not for simple object population - big complex queries that need to run as quickly as possible - in such a case, you don't want to add the performance overhead of trying to map your results to an object).

    >
    > I find that the SQL based DAO layer is less code and simpler (and since it's simpler, it's faster). I like to see the SQL that will execute.

    Simpler is not always faster. And simpler sometimes does not necessarily account for the true depth of the problem.

    >
    > If people are using O/R (JDO, etc.) they should have a DAO interface so that they can switch in case of a problem, so beans talk via DAO interface JDO.
    >

    I agree with this, and always use a DAO layer to hide my O/R usage.

    > Castor and EJB were popular last year for O/R and.... have been disproved in production. There is a chance that JDO, Hibernate and OJB will be disproved in production for complex apps as well.

    This is clearly not true. There are lots of EJB apps in production, and while they may not have had the simplest of births, they have certainly not been 'disproved'. And I have certainly delivered production solutions based on Castor, which have been ticking over nicely for many months. And Hibernate only seems to be growing in popularity.

    >
    > I hope there are more arguments with code to back them up as oposed to, my vendor promised 30% savings.

    Okay, having checked the iBatis examples, there is a bean 'Account' backed by an XML mapping file with its necessary SQL statements. The XML file is 145 lines, of which many are to do with querying, so I'll omit all but the most relevant.

      <result-map name="account-result" class="examples.domain.Account">
        <property name="id" column="ACC_ID"/>
        <property name="firstName" column="ACC_FIRST_NAME"/>
        <property name="lastName" column="ACC_LAST_NAME"/>
        <property name="emailAddress" column="ACC_EMAIL" null="no_email@provided.com"/>
        <property name="address" column="ACC_ID" mapped-statement="getAddressForAccount"/>
      </result-map>

      <mapped-statement name="getAccount" cache-model="account-cache" result-map="account-result">
        select * from ACCOUNT
        where ACC_ID = #value#
      </mapped-statement>

      <mapped-statement name="deleteAccount">
        delete from ACCOUNT
        where ACC_ID = #id#
      </mapped-statement>

      <mapped-statement name="updateAccount">
        update ACCOUNT set
          ACC_FIRST_NAME = #firstName#,
          ACC_LAST_NAME = #lastName#,
          ACC_EMAIL = #emailAddress:VARCHAR:no_email@provided.com#
        where
          ACC_ID = #id#
      </mapped-statement>

      <mapped-statement name="insertAccount" parameter-map="insert-params" >
        insert into ACCOUNT (
          ACC_ID,
          ACC_FIRST_NAME,
          ACC_LAST_NAME,
          ACC_EMAIL)
        values (
          ?, ?, ?, ?
        )
      </mapped-statement>

    And then the sample code (transaction and exception handling omitted)

          Account newAccount = (Account) sqlMap.executeQueryForObject("getAccount", new Integer(id));

          newAccount.setEmailAddress("neo@matrix.net");
          sqlMap.executeUpdate("updateAccount", newAccount);



    The equivalent to all of this in Castor (omitting transaction and exception handling as above):

        Account newAccount = (Account) jdoDb.load(Account.class, new Long(id));
        newAccount.setEmailAddress("neo@matrix.net");

    The object will persist when the transaction commits, regardless of whether this is an insert or an update.
    The mapping would look something like:

    <mapping>
      <class name="examples.domain.Account" identity="id">
        <map-to table="ACCOUNT"/>
        <field name="id" type="long" >
          <sql name="ACC_ID" type="integer"/>
        </field>
        <field name="firstName" type="string">
            <sql name="ACC_FIRST_NAME" type="varchar"/>
        </field>
        <field name="lastName" type="string">
            <sql name="ACC_LAST_NAME" type="varchar"/>
        </field>
        <field name="emailAddress" type="string">
            <sql name="ACC_EMAIL" type="varchar"/>
        </field>
        <field name="address" type="examples.domain.Address" required="true">
            <sql name="ACC_ID"/>
        </field>
      </class>
    </mapping>

    The point is, for a simple object like this, the mapping would be generated on the fly when I build my source. I never need to see it, or even care that it exists. I fail to see how this is not 'less code and simpler'.
  40. Employee as a JDO Example[ Go to top ]

    >     Account newAccount = (Account) jdoDb.load(Account.class, new Long(id));
    >     newAccount.setEmailAddress("neo@matrix.net");
     I omitted the following line here:

    jdoDb.create(newAccount);
  41. And therein lies the rub:

        <field name="hireDate">
            <extension vendor-name="jdogenie" key="jdbc-column">
                <extension vendor-name="jdogenie" key="jdbc-column-name" value="ENAME" />
            </extension>
        </field>

    It is very difficult to debug an incorrect mapping.

    I am a big fan of mapping tools (hibernate, ojb, toplink), but this is definitely one of the big cons. That is, the source of the error is not visible through any normal debugging methods. If the types of the mismatched mappings were the same (for example, two Strings), then the error itself might not even show up for a very long time (long after the developer who created the mapping has left the building).
    > It is very difficult to debug an incorrect mapping.

    >
    > I am a big fan of mapping tools (hibernate, ojb, toplink), but this is definitely one of the big cons. That is, the source of the error is not visible through any normal debugging methods. If the types of the mismatched mappings were the same (for example, two Strings), then the error itself might not even show up for a very long time (long after the developer who created the mapping has left the building).

    This is where unit testing can help. Use a re-creatable test database instance (use HSQLDB if you can't get that) and verify that the O/R mapping is correct. Agreed, it's hard to track down a mapping error by conventional means, but putting a full test harness in place means that almost all errors like this are caught immediately. And you can make extensive changes to your OM with confidence. In practice, this is akin to an insert statement which inserts values in the wrong columns (maybe due to order sensitivity), which I personally would find just as annoying to debug without a test harness.
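
    As a sketch of the kind of check such a test harness performs (the class and type names here are illustrative, not a real tool): verify the declared field-to-column mapping against the actual schema's column types. A crude type-compatibility check like this would immediately flag the hireDate/ENAME mix-up in the JDO Genie mapping shown earlier, even though both mappings look plausible at a glance.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class MappingChecker {

    // Check each mapped column: it must exist in the schema, and its SQL
    // type must be compatible with the Java type of the mapped field.
    // Returns a map of field name -> description of the mapping error.
    public static Map verify(Map fieldTypes,    // field -> Java type
                             Map fieldToColumn, // field -> column name
                             Map schemaTypes) { // column -> SQL type
        Map errors = new HashMap();
        for (Iterator it = fieldToColumn.keySet().iterator(); it.hasNext();) {
            String field = (String) it.next();
            String column = (String) fieldToColumn.get(field);
            String sqlType = (String) schemaTypes.get(column);
            String javaType = (String) fieldTypes.get(field);
            if (sqlType == null) {
                errors.put(field, "no such column: " + column);
            } else if (!compatible(javaType, sqlType)) {
                errors.put(field, column + " is " + sqlType
                        + ", but field is " + javaType);
            }
        }
        return errors;
    }

    // Crude Java-to-SQL type compatibility table, enough for the example.
    private static boolean compatible(String javaType, String sqlType) {
        if ("java.lang.String".equals(javaType)) return "VARCHAR".equals(sqlType);
        if ("java.util.Date".equals(javaType))   return "DATE".equals(sqlType);
        if ("int".equals(javaType))              return "NUMERIC".equals(sqlType);
        return false;
    }
}
```

    Feeding it the EMP schema (EMPNO NUMERIC, ENAME VARCHAR, HIREDATE DATE) and the swapped mapping reports errors for both hireDate and name, long before any data silently lands in the wrong column. It is no substitute for a real round-trip test against HSQLDB, but it shows why automated checks catch what eyeballing the descriptor does not.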

    <tangent>
    Now the problem is justifying extra up-front cost on a project where the timescales and budget are already as tight as they can go, and the most important factor is time-to-market :)

    "Why do you need unit testing? It should just be right first time, otherwise what are we paying you for?"
    </tangent>
  43. Employee as a JDO Example[ Go to top ]

    Robin,

    > By the way, the example that Thomas posted was deliberately simple, and this is merely an equivalent. Don't presume that JDO is only applicable for such "simple" models where the domain is just a value object with simple properties. We use JDO to persist highly complex models with collaborative relationships and inheritance/implementation hierarchies where these represent good design choices.
    >

    Any simple example will surely make the framework or tool with the highest level of abstraction look like the best solution. Nothing wrong with that. I think that the strongest argument for JDO is that it is a standard, that there are competing implementations, and that switching implementations is limited to modifying the persistence descriptors. Add to that the option of automatically generating these descriptors. The only thing we need now is more widespread adoption, especially from the big appserver and database vendors.

    On the other hand, a framework like the Spring Framework does support calling stored procedures as well as querying and updating the database. In the JDO world, is there any support for calling a stored procedure to, for instance, give every employee a 10% raise? If not, could you get the connection from the PersistenceManager and write your own JDBC within the same transaction context?

    Regards,

    Thomas Risberg
  44. Employee as a JDO Example[ Go to top ]

    Hi Thomas

    "In the JDO world, is there any support for calling a stored procedure to for instance give every single employee a 10% raise? If not, could you get the connection from the PersistenceManager and write your own JDBC within the same transaction context?"

    Most (if not all) JDO implementations for relational databases provide support for this sort of thing. With JDO Genie you can get a JDBC connection from a PM (same tx context) and call a stored proc to do bulk updates or whatever. If you are using the level 2 cache you can evict the classes for the tables you modified.

    Cheers
    David
    JDO Genie - High Performance JDO for JDBC
  45. Employee as a JDO Example[ Go to top ]

    I'll add a comment here as well, Thomas.

    As a vendor-independent JDO consultant I have researched and used a number of implementations.

    Firstly, the specification does not prescribe any techniques that would only be applicable to Relational data stores. In this way you will see that JDO is technically not an O/R Mapping Standard. It is actually a standard for transparent object persistence. The spec standardizes the way that applications should manipulate persistent objects, and the lifecycle of those objects.

    Different implementations implement this behaviour over different data stores. Thus only the Relational implementations are O/R Mapping Tools per se.

    Now to your point. JDO defines a mechanism (the Query interface) through which queries can be executed on the target data store. The default language for such queries is JDOQL. However the spec provides for the use of the same javax.jdo.Query interface to execute queries in other languages.

    Most Relational JDO vendors are using this to facilitate the execution of Queries in SQL, as well as queries that are mapped to Stored Procedures. Now in this context (javax.jdo.Query) these are not usually generic SQL or generic Stored Procs - they must return collections of ObjectId values from which a collection of Hollow objects is created. Thus you have complete control over the SQL that is executed, and yet the semantics of the Query remain the same. (In JDO, all javax.jdo.Queries return a collection of persistent JDO instances.)

    The generic-SQL and generic-Stored Proc support is also provided (by most JDO Relational implementations), but outside the Query interface. This is usually done by exposing the underlying JDBC Connection to the application.

    Now, developers who use any of these features (O/R-mapping, Queries in SQL or Stored-Procs, or direct execution of statements over the java.sql.Connection) know that they are working beyond the letter of the spec. They are therefore limiting the portability of their work (please read on before jumping on the soap box!).

    For example:
    - if you use SQL you probably can't port effortlessly to an ODBMS-based JDO implementation.
    - if you use Stored Procs, you probably can't port effortlessly to a JDO implementation without corresponding support or to a back-end database that doesn't support stored procs.
    - if you use O/R Mapping, you can port your application anywhere, but if you move to a different Relational JDO implementation you may have to alter the persistence descriptor.

    None of these is a show stopper. But if you deviate from the letter of the spec you would be wise to appropriately document and encapsulate such deviation, in order to minimize the impact that might later arise.

    This was lengthy, I know, and I appreciate your reading it.

    Kind regards, Robin.
  46. Hello All. The title of this message is deliberately confrontational, and I look forward to the informed, but of course polite, debate that will ensue. Let me put my case:

    I'm pleased to hear from Tom Jenkinson's post (yes, he writes even longer ones than I do!) that Hibernate automatically persists new or dirty objects on commit.

    This is one of the key features that JDO provides under the banner of Transparent Persistence, a partial list of which would include:

    1. Automatic caching of a before-image of Instances which are changed within a transaction, according to the transaction properties

    2. Automatic change-tracking of Instances, so they become dirty when they are changed

    3. Automatic synchronization of dirty Instances with the data store upon commit (so applications don't have to explicitly save changes)

    4. Automatic construction of Instances in the persistence manager's cache when an application navigates to them over a persistent reference

    5. Automatic retrieval of data from the data store when fields not in the default fetch group are accessed

    6. Automatic mapping of Java types to the field types of the underlying data store

    7. Automatic persistence of transient objects, which are referenced by persistent fields of an object already or newly persistent, when the transaction commits.

    8. These and everything else JDO does to make the database seem to disappear: the application can primarily operate on Instances in memory and traverse the object model without any concern for the database, yet it is still there and being accessed "under the covers".

    [quoted from "Java Data Objects" by Roos (Addison-Wesley), with my permission!]

    Now to my point about DAO patterns. DAO was formalized by J2EE architects trying to formulate a strategy for making object persistence infrastructures portable across database implementations. This is a laudable aim. However it is flawed when applied to Transparent Persistence technologies such as JDO.

    JDO allows you to build a comprehensive domain model and interact with graphs of persistent objects. JDO abstracts you from the underlying persistence infrastructure implementation. Essentially, JDO _is_ the DAO pattern, but implemented at an Object Persistence abstraction level.

    If you front a JDO implementation with the facade of a DAO pattern implementation, you will compromise many of the benefits that Transparent Persistence has to offer.

    For example, all that a JDO-based application HAS to do is:
    1. demarcate transactions (unnecessary if in a J2EE CMT context)
    2. makePersistent() new objects (unnecessary if the new object would have become reachable from an already or newly persistent object).
    3. Occasionally find an object through ObjectId or Query, after which applications merely navigate the singleton/collection references these objects support (by using visible methods).

    DAO implementations usually hinge on methods with semantics like create(), load() and store(). If you front the JDO instances with a DAO pattern you will pollute your applications with unnecessary invocations of persistence-infrastructure methods. Your application will potentially have to:

    - explicitly load() each persistent object that interests you before you can invoke its methods (transparency #4 above)
    - explicitly keep track of all objects that are changed in the transaction (transparency #2 above)
    - explicitly save() each dirty instance before transaction commit (transparency #3 above)
    - explicitly create() EACH new (transient) instance that is to be stored in the data store (usually covered by transparency #7 above)

    Instead you should consider adopting JDO as your interface to persistence. If you believe the interface is a good one, then you can choose a suitable implementation based on Quality of Service and support for your chosen data store(s). If portability is required over a new database, you merely deploy against an appropriate JDO implementation. If no implementation is suitable then you could consider writing one. Implementing the PersistenceManager contract is not rocket science, and for internal use you would only implement the subset of the contract that was actually required.

    So, I maintain that "DAO patterns compromise Transparent Persistence", and if you can use JDO then you should NOT put a DAO facade between your application and the persistent objects it uses.

    Comments? Kind regards to you all, Robin.
  47. Great constructive debate. Let's see if we can keep it up.
    (nice post Robin)

    There is some debate over which O/R approach (EJB, Castor, JDO, OJB, or Hibernate) is "best" practice (anecdotally, Hibernate seems the most popular O/R tool). That is why a DAO interface is useful: if you need to switch, your beans should not need to change, just the DAO implementation. So a DAO interface is a good practice.
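As a sketch of this point (all names hypothetical, with in-memory stand-ins playing the role of real O/R or JDBC implementations), business code written against a DAO interface survives a swap of persistence strategy unchanged:

```java
import java.util.HashMap;
import java.util.Map;

public class DaoSwitchDemo {

    // The interface business code depends on; hypothetical name.
    public interface AccountDao {
        String findOwner(int accountId);
    }

    // Could be backed by Hibernate/JDO; an in-memory map stands in for it.
    public static class OrMappedAccountDao implements AccountDao {
        private final Map<Integer, String> byId = new HashMap<>();
        { byId.put(1, "alice"); }
        public String findOwner(int id) { return byId.get(id); }
    }

    // Could run hand-tuned SQL through JDBC; simulated here.
    public static class SqlAccountDao implements AccountDao {
        public String findOwner(int id) {
            // imagine: SELECT owner FROM account WHERE id = ?
            return id == 1 ? "alice" : null;
        }
    }

    // Business code never changes when the implementation is swapped.
    public static String ownerReport(AccountDao dao) {
        return "Owner of #1 is " + dao.findOwner(1);
    }

    public static void main(String[] args) {
        System.out.println(ownerReport(new OrMappedAccountDao()));
        System.out.println(ownerReport(new SqlAccountDao()));
    }
}
```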

    An alternative DAO implementation is a non-O/R, SQL-based DAO. (Anecdotally, this is how most production apps are written: with low-level JDBC and SQL.) Now we have the iBatis DB layer, Jakarta's Commons-SQL, etc.

    Either of the two choices can work. I do not think anyone is saying you will fail with O/R: you can build a working, updatable application with the O/R tool of your choice. No one is debating that; I think anyone can.

    But in some situations/environments, a SQL-based DAO is a "better" choice relative to O/R, IMO.

    Here is the argument:

    #1. In practice, on a large/complex application you will need to verify that you are running optimal queries, because it is possible that some screens will be slow.
    “Large/Complex” implies complex joins (correlated, self, etc.). Have you ever been horrified to see people doing 6-way joins at a client site?
    Even using ANSI SQL, a few changes can make a very big difference.
    So you need to be able to:
    a. do a show plan of the execute path
    b. touch up the sql
    This is harder with O/R. In iBatis, etc., I can see the SQL string and work with it; it's an issue of accessibility. The O/R side might say, "Well, if I wanted to, I could add SQL." Great, that is exactly my point. Once you are writing SQL strings, the O/R layer is overhead you did not use.
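A rough illustration of the accessibility point (this is not the actual iBatis API, just a hypothetical named-query registry): the SQL is an ordinary string that can be pulled out, run through a show-plan tool, and replaced with a tuned version:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical named-query registry: with a SQL-based DAO the statement is a
// plain string a DBA can inspect and hand-tune. Not the actual iBatis API.
public class NamedQueries {

    private static final Map<String, String> QUERIES = new HashMap<>();
    static {
        // Before tuning: the join someone found to be slow in production.
        QUERIES.put("orders.byCustomer",
            "SELECT o.id, o.total FROM orders o " +
            "JOIN customer c ON c.id = o.customer_id WHERE c.name = ?");
    }

    public static String get(String name) { return QUERIES.get(name); }

    // Tuning is just editing the string; no mapping layer in the way.
    public static void tune(String name, String sql) { QUERIES.put(name, sql); }

    public static void main(String[] args) {
        System.out.println(get("orders.byCustomer"));
        tune("orders.byCustomer",
            "SELECT o.id, o.total FROM orders o WHERE o.customer_id = " +
            "(SELECT id FROM customer WHERE name = ?)");
        System.out.println(get("orders.byCustomer"));
    }
}
```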

    #2. “Large” also implies lots of concurrent users and data. Some people compare O/R DAO vs. SQL DAO in the development environment. In development you have hundreds of rows of data and maybe 10 developers (concurrent users). Any query will run fine (it's cached in memory by any DB engine).
    But if you have thousands of concurrent users and 100+ gigs of data, which is possible on the internet, the queries can create hotspots and slow screens, and you need to tune them then. So now you have to work on a production system.
    Of course, if your production DB fits on a laptop, this does not apply.

    Expertise in SQL does not mean that one does not know Java, as sort of implied by some posters. Also, one can use a generator with a SQL-based DAO. It's not fun writing getters/setters.

    Tuning O/R is working on the symptom, not the cause. Some say "use a better O/R" or "fine-tune your caching". The alternative is to say: users are waiting for their data, so let's tune the SQL. (iBatis provides built-in caching, so comparing SQL DAO caching to O/R caching is not useful; cache is cache.)

    The slowest part of J2EE is data access. We have all been on slow web sites, and ADO.NET is making a move on J2EE (MS ADO.NET works similarly to a SQL DAO), but with a SQL-based Java DAO we can have responsive sites. Also, looking at/evaluating a SQL-based DAO will not hurt you.

    Summary:
    · In theory, O/R is better than relational.
    · On smaller/simpler sites, O/R and a SQL DAO are similar.
    · Complex/large apps imply complex joins, so a SQL-based DAO works better in practice/production.

    .V
  48. Hi Robin,

    BTW, a very similar issue is discussed in the following thread:
    Simplifying Domain Model Persistence Using Java Data Objects
     
    Concerning the key features: As far as I see, everything except default fetch groups is supported by reflection-based persistence toolkits like Hibernate too. And the value of default fetch groups when working with relational databases is very debatable.
     
    <quote>
    - explicitly load() each persistent object that interests you before you can invoke its methods (transparency #4 above)
    - explicitly track of all objects that are changed in the transaction (transparency #2 above)
    - explicitly save() each dirty instance before transaction commit (transparency #3 above)
    - explicitly create() EACH new (transient) instance that is to be stored in the data store (usually covered by transparency #7 above)
    </quote>
     
    I agree that another CRUD-style DAO abstraction layer doesn't add any benefit. But I do believe that data access services are good practice (please see my latest post in the other thread), being somewhat higher-level than DAOs. These services provide high-level data access methods as needed by business logic services. Of course, in many applications, data access and business logic services will merge if there isn't significant value in separating the two. In any case, such services suit the needs of MVC web applications perfectly, as controllers will use them to retrieve or modify data and provide the results as model to the view.
     
    For example, a "List loadProjects(ProjectSearchParameters psp)" method could load all Project instances that match the given search parameters. Both Project and ProjectSearchParameters are domain objects without dependency on a specific persistence strategy. A Project could represent an object tree, containing tasks, employees, etc. For this read-only case, the interface of the data access service will feature just this method; the underlying implementation has to take care of fulfilling the contract, i.e. retrieving the appropriate object trees to a specified detail level. The implementation can use an O/R toolkit to achieve this, of course, and benefit from its services, including automatic subtree loading.
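A minimal sketch of such a service, reusing the post's own names (Project, ProjectSearchParameters), with an in-memory implementation standing in for an O/R- or JDBC-backed one:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ProjectService {

    public static class Project {
        public final String name;
        public Project(String name) { this.name = name; }
    }

    public static class ProjectSearchParameters {
        public final String namePrefix;
        public ProjectSearchParameters(String namePrefix) { this.namePrefix = namePrefix; }
    }

    // The use-case-oriented interface that controllers and business logic see.
    public interface ProjectDataAccess {
        List<Project> loadProjects(ProjectSearchParameters psp);
    }

    // In-memory implementation; a Hibernate- or JDBC-backed one would honour
    // the same contract (fully initialized object trees, no lazy proxies).
    public static class InMemoryProjectDataAccess implements ProjectDataAccess {
        private final List<Project> all =
            Arrays.asList(new Project("CWM Warehouse"), new Project("CRM Portal"));
        public List<Project> loadProjects(ProjectSearchParameters psp) {
            List<Project> result = new ArrayList<>();
            for (Project p : all)
                if (p.name.startsWith(psp.namePrefix)) result.add(p);
            return result;
        }
    }

    public static void main(String[] args) {
        ProjectDataAccess service = new InMemoryProjectDataAccess();
        for (Project p : service.loadProjects(new ProjectSearchParameters("C")))
            System.out.println(p.name);
    }
}
```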

    Note that it is not acceptable for the caller of this service to get lazy-loading proxies, as the underlying transaction has already terminated and the connection been returned to the pool. So the objects have to be preinitialized to avoid lazy initialization errors, which is possible with both Hibernate and JDO, AFAIK. A different solution would be to keep the transaction open until the view has finished rendering, but this can lead to data access errors in the view, and it definitely leads to spreading the data access and cleanup code around your application - which MVC tries to avoid in the first place. Standalone applications are different in this respect, but if one wants to reuse the business logic services, the same rules apply.

    A special issue is handling transactional updates in a request/response environment like the web. Long transactions spanning multiple requests aren't an option. So one often needs to read data in one request/transaction, expose it to the UI layer which is able to modify it, and update it in another transaction. Easy reassociation with a new transaction is an issue here, as is support for modification detection between transactions and optimistic transaction techniques like versioning or timestamps. The data access services for these requirements will look like DAOs but can manage dependent object trees implicitly if necessary. They need to handle first class objects explicitly but can handle second class objects implicitly, in JDO terminology.
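The versioning technique mentioned can be sketched in isolation (all names hypothetical; real toolkits do this against a version column or timestamp in the database):

```java
// Minimal sketch of optimistic versioning across requests: data is read in
// one transaction, edited outside any transaction, and written back in a
// second transaction that fails if the row's version has moved on.
public class OptimisticUpdate {

    public static class Record {
        public final int version;
        public final String value;
        Record(int version, String value) { this.version = version; this.value = value; }
    }

    // Stands in for the database row.
    private static Record stored = new Record(1, "draft");

    public static Record read() { return stored; }

    // Second transaction: compare versions before applying the update.
    public static boolean update(int expectedVersion, String newValue) {
        if (stored.version != expectedVersion) {
            return false;                       // concurrent edit detected
        }
        stored = new Record(expectedVersion + 1, newValue);
        return true;
    }

    public static void main(String[] args) {
        Record r = read();                          // request 1: read, version 1
        boolean ok = update(r.version, "final");    // request 2: succeeds
        boolean stale = update(r.version, "other"); // same old version: rejected
        System.out.println(ok + " " + stale);       // true false
    }
}
```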

    The latter use case doesn't seem to be a major focus of JDO, but it is explicitly supported by Hibernate. Can you comment on the support for this in current JDO implementations?
     
    Of course, such data access services do not provide code-level portability between persistence toolkits. But they do allow for easy reimplementation of such use-case-oriented data access functionality, with required code changes being as few and local as possible. Certain implementations can work with O/R mapping while others can use plain JDBC, and there can be tuned implementations of some data access interfaces that are just used for certain installations of the system, e.g. only on top of certain databases.

    All things considered, I don't see a necessity for a single persistence standard for applications layered this way. I'd rather have multiple O/R mapping toolkits without a single standardized API but with specific niceties than numerous unsophisticated least common denominator ones. The concepts are similar anyway, and I prefer to be able to use the specific functionality that the toolkit of my choice offers. The best example is probably query languages: Hibernate's is far more sophisticated than JDOQL, and new features are added within weeks if they make sense. And in the case of TopLink, the issue is similar to choosing Oracle as database: Why pay a huge license fee but then restrict yourself to a small subset of the functionality that it offers?

    And if I would really choose to migrate the application from Hibernate to JDO (or rather to some specific JDO implementation and its extensions) or TopLink or whatever some time, then this would represent a significant change that would need intense retesting of the whole application, and thus would only take place for a new major release anyway. So I don't mind adapting some application code, as long as the affected code is not spread all around the application.

    BTW, I consider the idea of switching your JDO implementation on the fly or per installation of your system a marketing gimmick, at least for real-life applications. It's like switching your database: for non-trivial apps, JDBC and SQL allow for a high degree of portability, but not without any changes to your code at all, due to subtle or even not-so-subtle differences in SQL syntax and JDBC implementation. O/R mapping toolkits are similar: mapping descriptors and the actual semantics of the access code are very tricky things when you look at the details. Switching persistence infrastructure for a sizable project simply isn't as trivial as one could assume, on any level.

    Juergen
  49. Spring and Hibernate[ Go to top ]

    <Tom Risberg>
    The biggest advantage of the Spring Framework's JDBC abstraction layer is that it handles all the nasty exception handling as well as opening and ALWAYS closing the connection for you. You are not forced to use all those try/catch/finally blocks which seems to be 2/3 of the JDBC code.
    </Tom Risberg>

    Spring's callback approach is indeed very valuable. JDBC code gets very maintainable, especially easier to grasp. The lines that actually perform work are not buried under lots of plumbing and exception handling code...
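The callback idea can be illustrated with a toy template (FakeConnection is a stand-in for a JDBC Connection; this shows the shape of the approach, not Spring's actual API):

```java
// Sketch of the callback approach: a template owns the open/close and error
// handling, while the caller supplies only the work to be done.
public class CallbackTemplate {

    public static class FakeConnection implements AutoCloseable {
        public String query(String sql) { return "result-of:" + sql; }
        public void close() { /* released back to the pool */ }
    }

    public interface ConnectionCallback<T> {
        T doInConnection(FakeConnection con) throws Exception;
    }

    // The template: the connection is ALWAYS closed, and checked exceptions
    // are translated to unchecked ones, so callers write no try/finally.
    public static <T> T execute(ConnectionCallback<T> action) {
        try (FakeConnection con = new FakeConnection()) {
            return action.doInConnection(con);
        } catch (Exception ex) {
            throw new RuntimeException("data access failure", ex);
        }
    }

    public static void main(String[] args) {
        // Caller code is just the one line that actually performs work.
        String r = execute(con -> con.query("SELECT 1"));
        System.out.println(r);
    }
}
```

The lines that perform work stand alone; the plumbing lives in one place instead of being repeated around every statement.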

    <Tom Risberg>
    I think there is a place for both O/R mapping tools and JDBC frameworks - the trick is to choose the right tool for the task. I'm pretty impressed with what I have seen from Hibernate so far and I hope I will be able to use it in a project soon.
    </Tom Risberg>

    I absolutely agree. Some applications or at least some tasks do not match an object model but are rather of the data processing kind. JDBC frameworks like Spring's that give you the full power of SQL in a convenient way are a perfect match here.

    Other applications demand a fine-grained object model, and this is the kind that I've personally been confronted with more often. Currently, we are using Hibernate in a new project and are very pleased with it so far. Of course, a nice JDO implementation would probably do the job too, although presumably only with certain vendor extensions. I really like Hibernate's sophistication; it goes far beyond standard JDO, and it integrates nicely into a J2EE environment.

    BTW, the new project I've just mentioned uses Spring for service infrastructure and web MVC, Hibernate for the data access layer, and a JNDI DataSource for connection pooling. It will use high-level transaction demarcation via Spring's transaction support (currently under development) on top of JTA soon. We emphasize testing heavily and are thus pleased by the ease of testability outside of an application server, due to Spring's mock JNDI and JDBC DataSource stuff. A really fine combo; our application project has not had to develop any noteworthy custom infrastructure so far.

    There's one exception: We have developed a HibernateTemplate/HibernateCallback that applies the same callback approach to Hibernate as Spring's JdbcTemplate does to JDBC. I really can't stand all those finally blocks and checked exceptions even on close() calls anymore. The callback code leads to simpler and more readable code than Hibernate session closing/exception handling code, just like with JDBC. I will probably release the Hibernate callback stuff as a Spring extension in the near future, in a later stage of the project.

    Finally, regarding Spring releases: We plan to release 0.8 very soon, and 1.0 around the end of May. The framework is already very stable, but there's still a lot of work to do in terms of documentation. Rod's book is very helpful, of course, as it discusses many of Spring's ideas in detail. But note that there has been a lot of work since the framework version presented in the book. Especially the transaction handling and the AOP stuff still need to get documented properly. In the meantime, anyone who isn't afraid of browsing JavaDoc or source code is invited to have an immediate look.

    Juergen
    (Spring Framework developer)
  50. Hi Luis

    ""using an O/R tool shaves weeks and months off development time, and ends up with far better performance, stability and maintainability.""
    "This is a very dramatic statement.Weeks and months"?!? From where does this values come from?"

    On a recent project of ours the leader estimated that using JDO Genie had cut at least 1/3 of the time from the project. Being a brand new project they could use the schema generated from the classes with a few customizations. Savings on projects involving mapping to existing schemas are less but still significant. You really have to try JDO for yourself.

    ""far better performance"... it seems to me that, regardless of the O/R tool used, they all end up executing SQL queries via JDBC, so I find this statement a bit weird."

    Good JDO implementations (and O/R mappers) will use statement batching and other features rarely seen in hand written code.

    JDO implementations also do caching at various levels. The JDO spec provides for local caching in the PersistenceManager. Nearly all JDO implementations for relational databases also provide a level 2 cache shared by all PMs.

    ""stability" .... I don't understand what you mean by this, could you please explain?"

    The SQL is automatically generated and always correct. Hand written SQL is hard to maintain and bugs in SQL only come out at runtime.

    "Portability across multiple databases seems to be very important for O/R tools advocates/users (regardless of how infrequently this happens)"

    It's true that not that many apps need to be deployed onto different database servers. However, it is very common for databases to change between projects. This week Oracle, next month a new project on MySQL. JDO shields you from having to be an expert in all the different databases and JDBC drivers out there. The JDO implementation has to handle all the tricky incompatibilities and bugs, and you can concentrate on the business code.

    "How come the issue of portability among O/R tools almost never come up? Is that not important as well? Does the JDO standard helps somehow with this issue? Is it possible just to plug and unplug different O/R tools?"

    This is exactly the issue that the JDO standard covers. If you switch JDO vendors you will need to redo your mapping but your code stays the same.

    Cheers
    David
    JDO Genie - High Performance JDO for JDBC
  51. Hi,

    <quote by='David'>
    On a recent project of ours the leader estimated that using JDO Genie had cut at least 1/3 of the time from the project.
    </quote>

    That is indeed impressive.
    I must confess that my only work with O/R tools was done with already existing databases, because of that, my lack of perception of "saved time" might be a little off base.

    <quote by='David'>
    The SQL is automatically generated and always correct. Hand written SQL is hard to maintain and bugs in SQL only come out at runtime.
    </quote>

    I keep hearing this, and I still don't understand it.
    Most databases (at least the ones I work with) have a Visual Query Designer that generates SQL, so the correctness of the SQL is assured.
    Even when I write the SQL by hand, the first thing I do is to run it in the database first, I only use the SQL in the application after I make sure it works.
    There are even tools like Tora that allow the debugging of SQL, there is little excuse to produce non-functional SQL code.

    <quote by='David'>
    However it is very common for databases to change between projects. This week Oracle, next month new project on MySQL. The JDO implementation has to handle all the tricky incompatibilities and bugs and you can concentrate on the business code.
    </quote>

    That is an excellent point.

    <quote by='David'>
    JDO shields you from having to be an expert in all the different databases and JDBC drivers out there. The JDO implementation has to handle all the tricky incompatibilities and bugs and you can concentrate on the business code.
    </quote>

    I worry that the way chosen to shield me is to reduce all databases to a least common denominator.
    You spoke above of MySQL and Oracle; the feature set of MySQL is a minimal subset of the feature set of Oracle, and after I paid a pile of money for Oracle I would be a little pissed off if I couldn't use the features for which I paid.
    And using stored procedures, triggers and assorted database functionality also saves a lot of time.

    <quote by='David'>
    This is exactly the issue that the JDO standard covers. If you switch JDO vendors you will need to redo your mapping but your code stays the same.
    </quote>

    OK, that's good to know; it just shows my ignorance about these matters.

    Best regards,
    Luis Neves
  52. Hi Luis

    "<quote by='David'>
    The SQL is automatically generated and always correct. Hand written SQL is hard to maintain and bugs in SQL only come out at runtime.
    </quote>
    I keep hearing this, and I still don't understand it ... Even when I write the SQL by hand, the first thing I do is to run it in the database first, I only use the SQL in the application after I make sure it works."

    I was thinking more of maintenance e.g. when the model changes in future. You now have to go and check all of the SQL in your app using the modified tables. If you have all of this in resource files or something this could be automated. However not many projects are that good about keeping SQL out of the Java code.

    "I worry that the way chosen to shield me is to reduce all databases to a least common denominator.
    You spoke above of MySQL and Oracle; the feature set of MySQL is a minimal subset of the feature set of Oracle, and after I paid a pile of money for Oracle I would be a little pissed of if I couldn't use the features for which I paid.."

    It's true that this does happen to a certain extent. However, it is up to the JDO implementation to provide access to the advanced features of a given database. All the JDO implementations I know of provide a way to run SQL directly for the small percentage of things in the project that need it.

    Cheers
    David
    JDO Genie - High Performance JDO for JDBC
  53. \Tinker\
    On a recent project of ours the leader estimated that using JDO Genie had cut at least 1/3 of the time from the project. Being a brand new project they could use the schema generated from the classes with a few customizations. Savings on projects involving mapping to existing schemas are less but still significant. You really have to try JDO for yourself.
    \Tinker\

    Truly, I don't understand this. SQL isn't that difficult, database schema creation isn't that difficult.

    I can't think of any project where 1/3 of the time was spent on creating tables, writing SQL, and mapping to Java semantics through JDBC.

    Perhaps a lot of this goes back to relative experience. Someone with experience in a given database just doesn't have that much trouble creating schemas and getting to/from Java using JDBC. But if someone were fairly ignorant of SQL in general, or JDBC, or the particulars of physically creating a database in the RDBMS of your choice, then I can see how something like the product you describe could help out.

    \Tinker\
    The SQL is automatically generated and always correct. Hand written SQL is hard to maintain and bugs in SQL only come out at runtime.
    \Tinker\

    Never say "always correct" when talking about auto-generated anything :-) I think perhaps you're overstating SQL maintenance.

    \Tinker\
    It's true that not that many apps need to be deployed onto different database servers. However, it is very common for databases to change between projects. This week Oracle, next month a new project on MySQL. JDO shields you from having to be an expert in all the different databases and JDBC drivers out there. The JDO implementation has to handle all the tricky incompatibilities and bugs, and you can concentrate on the business code.
    \Tinker\

    I understand where you're coming from here, but I think it's a flawed premise. Perhaps developers don't need to be experts in the RDBMS they're working with, but they should be very knowledgeable in it. The fact is, it's rather dangerous for a developer to be persisting anything without understanding what's happening at the database level. Ignorance is not bliss in such cases!

        -Mike
  54. <Mike>
    Truly, I don't understand this. SQL isn't that difficult, database schema creation isn't that difficult.
    ...
    But if someone were fairly ignorant of SQL in general, or JDBC, or the particulars of physically creating a database in the RDBMS of your choice, then I can see how something like the product you describe could help out
    <Mike>

    True. But do you want to be doing this all the time? Wouldn't it be better if this were done for you automatically? And the SQL that is generated is also always correct and highly optimized. There is no need to 'verify the queries in a Query Analyzer' for each and every query. All this adds up. And when you have to change your application model, the changes will result in a modified schema, and you would have to make those changes manually - there may be projects where, once the model/schema is designed, there will be no more changes, but in reality many people would like the option to change for various reasons, and that is where you can save time by relying on these tools.

    And not all applications start off from an object model. We have a customer who wanted to reverse engineer a schema with close to 500 tables into a Java object layer. Imagine the time involved in this - it was achieved, with the help of our tools, in a couple of hours.

    <Mike>
    The fact is, it's rather dangerous for a developer to be persisting anything without understanding what's happening at the database level. Ignorance is not bliss in such cases!
    <Mike>

    Though I wouldn't call it dangerous, the knowledge will help.

    regards
    Rajesh
    (S Rajesh Babu)
    ObjectFrontier Inc
    www.ObjectFrontier.com

    FrontierSuite for JDO - Enterprise Solution for JDO Applications
  55. \Suraparaju\
    True. But do you want to be doing this all the time? Wouldn't it be better if this were done for you automatically? And the SQL that is generated is also always correct and highly optimized. There is no need to 'verify the queries in a Query Analyzer' for each and every query. All this adds up. And when you have to change your application model, the changes will result in a modified schema, and you would have to make those changes manually - there may be projects where, once the model/schema is designed, there will be no more changes, but in reality many people would like the option to change for various reasons, and that is where you can save time by relying on these tools.
    \Suraparaju\

    I don't advocate writing everything by hand - I think I'd be bored to death rather quickly doing that. I use various tools and custom scripts to get the CRUD out of the way without the need for hand coding it. I also keep 99% of my SQL out of the code, and in configuration files, so that minor schema changes or name changes in columns/tables don't require a recompile.

     This gets the advantage of not hand coding, but at the same time, all of the guts of the SQL are exposed at all times for tweaking.

    On the SQL efficiency side - I don't really understand your comments on highly optimized SQL. You can't really generically optimize SQL - it depends on what your app is doing, and what other apps are doing with the database, and always depends on external factors - especially the RDBMS configuration, index usage, etc.

    \Suraparaju\
    And not all applications start off from an object model. We have a customer who wanted to reverse engineer a schema with close to 500 tables into a Java object layer. Imagine the time involved in this - it was achieved, with the help of our tools, in a couple of hours.
    \Suraparaju\

    I don't know anyone who would do this by hand. Fortunately, it's rather trivial to do via scripting.

    <Mike>
    The fact is, it's rather dangerous for a developer to be persisting anything without understanding what's happening at the database level. Ignorance is not bliss in such cases!
    <Mike>

    \Suraparaju\
    Though I wouldn't call it dangerous, the knowledge will help.
    \Suraparaju\

    Developers creating applications in ignorance of one or more of its major technology pieces (like the database) are an accident waiting to happen. If people believe they can build enterprise-level applications with only passing knowledge of their underlying RDBMS they're going to run into big problems as their application grows.

         -Mike
  56. Quote----------
    How come the issue of portability among O/R tools almost never comes up? Is that not important as well? Does the JDO standard help somehow with this issue? Is it possible just to plug and unplug different O/R tools?
    ---------------

    Probably the issue never came up because there was no possibility of compatibility. And the JDO standard does address this by providing portability across JDO implementations (O/R mappers, if you look behind the screen). And it is more or less like plugging and unplugging different implementations - with very little deployment activity.

    regards
    Rajesh
    ObjectFrontier Inc
    www.ObjectFrontier.com

    FrontierSuite for JDO - Enterprise Solution for JDO Applications

  57. > "Weeks and months"?!?
    > From where does this values come from? I'm not O/R mapping expert and my only experience with such things is with Hibernate (one of the best I'm told), and from my experience the time saved is null. From my point of view it just seems to replace boring tasks (writing the SQl) with another set of boring tasks (writing xml and metadata).

    Every O/R tool I've used allows you to in some way generate your mapping from your object model, be that by running a specific generation tool or embedding javadoc comments which are processed by something like XDoclet. In the majority of cases, you write your beans and you're done. (Or, if you want, write your mapping and let it generate your beans.) Writing the CRUD SQL for a large object model takes *significantly* longer than using a generated mapping file and editing it by hand for the 1% of cases that can't be generated.


    > I would be very interested in knowing how I could save "weeks and months off development time"
    >

    Well, it's removing a whole layer from your development process. It seems to me that writing code that translates your database contents into your object model is always going to be slower than not writing that code. Especially if you try and get that code to do stuff that an O/R tool does anyway (caching, optimistic/pessimistic locking, id allocation strategies, versioning, update batching, etc.). Reinventing the wheel is always going to take longer.


    > "far better performance"... it seems to me that, regardless of the O/R tool used, they all end up executing SQL queries via JDBC, so I find this statement a bit weird.
    >
    Yes they do, but they often end up writing better SQL - updating only fields that have changed, for example, or batching statements, or using lazy loading.
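The "updating only fields that have changed" point can be sketched as a diff between a loaded snapshot and the current state (illustrative only; real O/R tools do this internally):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// An O/R layer can diff a loaded snapshot against current state and emit a
// minimal UPDATE touching only the dirty columns. Names are illustrative.
public class DirtyFieldUpdate {

    public static String minimalUpdate(String table,
                                       Map<String, Object> snapshot,
                                       Map<String, Object> current) {
        List<String> sets = new ArrayList<>();
        for (Map.Entry<String, Object> e : current.entrySet()) {
            if (!Objects.equals(snapshot.get(e.getKey()), e.getValue())) {
                sets.add(e.getKey() + " = ?");   // only changed columns
            }
        }
        return sets.isEmpty() ? null
             : "UPDATE " + table + " SET " + String.join(", ", sets) + " WHERE id = ?";
    }

    public static void main(String[] args) {
        Map<String, Object> before = new LinkedHashMap<>();
        before.put("name", "Ann");
        before.put("city", "Oslo");
        Map<String, Object> after = new LinkedHashMap<>(before);
        after.put("city", "Bergen");             // only one field changed
        System.out.println(minimalUpdate("person", before, after));
        // UPDATE person SET city = ? WHERE id = ?
    }
}
```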

    > "stability" .... I don't understand what you mean by this, could you please explain?
    >
    Stability at runtime and also in your code. A mature O/R tool generating decent DB access code is less prone to bugs and inconsistencies than hand-coded SQL performing the same task. As you say, writing the JDBC and SQL code is boring, and bored developers can easily make mistakes. Debugging the DB access code is a portion of your development cycle - wouldn't it be nice to remove that time entirely? Add to that guaranteed type-safety, and the fact that table/field naming changes/additions that aren't relevant to your application don't require a recompile of your object model.



    > "maintainability" ... I also don't understand this. Do you mean, for example, that the steps required to adapt the code to the addition of a column to a table are somewhat less painful using a O/R tool than using SQL? If this is what you mean, I disagree.

    As an example, let's say someone implementing a bunch of your CRUD code writes a statement like "INSERT INTO [TABLE_NAME] VALUES (?, ?, ?)", which is completely reliant on the ordering and number of columns in your table (it happens). This is never a problem with O/R mapping.

    Or all of your tables are referred to by [SCHEMA].[TABLE_NAME] and the schema name is changed, or some tables are moved to a different schema. Or a table name is changed. With an O/R tool, there is one single line which will need changing - all of my querying and addressing of data is done at the object model level, so as long as my object maps to the correct table, I don't have to worry - hell, I could change it at runtime. If you are building all of your SQL manually, even if you're storing it in external config files, you will have to find every reference to a changed name and alter it. Yes, it's brainless find-and-replace stuff, but I can guarantee I'll finish first, simply because there is less to do.
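The single-point-of-change argument can be sketched as follows (hypothetical names; in a real O/R tool the mapping file plays the role of this registry):

```java
import java.util.HashMap;
import java.util.Map;

// If every statement is built from one table-name registry (as an O/R
// mapping effectively is), a schema or table rename touches exactly one
// line. Illustrative only.
public class TableRegistry {

    public static class Order {}

    private static final Map<Class<?>, String> TABLES = new HashMap<>();
    static {
        TABLES.put(Order.class, "SALES.ORDERS");  // the one line to change
    }

    public static String selectAll(Class<?> type) {
        return "SELECT * FROM " + TABLES.get(type);
    }

    public static void main(String[] args) {
        System.out.println(selectAll(Order.class));
        // SELECT * FROM SALES.ORDERS
    }
}
```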

    >
    >
    > <quote by='Tom'>
    > [...snip lots of useful stuff ...]
    > transparancy across multiple databases
    > [ ... ]
    > </quote>
    >
    > Portability across multiple databases seems to be very important for O/R tools advocates/users (regardless of how infrequently this happens), and since you seem to be one I will take this opportunity to ask you one thing.
    > How come the issue of portability among O/R tools almost never comes up? Is that not important as well? Does the JDO standard help somehow with this issue? Is it possible just to plug and unplug different O/R tools?

    Database portability is not paramount for me, but the fact that I can develop against a local instance of Postgres, unit test with a lightweight embedded HSQLDB instance, and deploy to Oracle with confidence makes my life easier.

    I couldn't care less about O/R tool portability - it's simply not the case that I change tools mid-project. However, given that most O/R tools persist POJOs, and that JDO defines an API for dealing with them, you should be able to switch vendors with only the hassle of updating your mapping (which can usually be generated anyway) - although I have to say, I'll believe that when I see it :)


    >
    > <quote by='Tom'>
    > Yes, I could write all this myself around JDBC, but why bother if at the end of it I still have to use SQL all over the place?
    > </quote>
    > No.
    > You can externalise your SQL in properties files or XML files, and you still end up with "pretty to look at" code.

    So? I still have to write (and maintain) the same statements over and over again, for each object (INSERT INTO foo, UPDATE bar). And then I have to get my results out of JDBC, and do what with them? Put them in a map and access them by name (losing all my type safety)? Manually populate a value object? Why, when a tool can do all this for me? Don't get me wrong, I think it is very important to understand the fundamentals of data access, but there are *so many* tools that make it easier, I am surprised people keep writing 'simple DAO layers' or 'JDBC wrappers'. Most of these provide barely more abstraction than straight JDBC, and buy you none of the extra functionality of a tool like, say, Castor. What's the point, other than as a learning exercise?
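
    The externalised-SQL approach the two posts are debating can be sketched in a few lines - a minimal example, assuming a properties file keyed by statement name (the keys and SQL here are invented). It also shows why every statement still has to be written and maintained by hand, one per table and per operation:

    ```java
    import java.io.IOException;
    import java.io.StringReader;
    import java.util.Properties;

    public class SqlCatalog {
        private final Properties statements = new Properties();

        // In a real app this would load from a .properties file on the classpath.
        public SqlCatalog(String source) throws IOException {
            statements.load(new StringReader(source));
        }

        public String get(String key) {
            String sql = statements.getProperty(key);
            if (sql == null) {
                throw new IllegalArgumentException("No SQL registered for: " + key);
            }
            return sql;
        }

        public static void main(String[] args) throws IOException {
            // One hand-written statement per table, per operation - the
            // maintenance burden the poster above is complaining about.
            SqlCatalog catalog = new SqlCatalog(
                "foo.insert=INSERT INTO foo (id, name) VALUES (?, ?)\n" +
                "bar.update=UPDATE bar SET name = ? WHERE id = ?\n");
            System.out.println(catalog.get("foo.insert"));
        }
    }
    ```

    The code stays "pretty to look at", but a schema change still means hunting down every affected entry in the catalogue - which is exactly the trade-off under discussion.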
  58. Maintainability & Portability[ Go to top ]

    Thanks for your well considered response, Tom.

    The other part of "Maintainability" is that changing business needs can be accommodated through changing the domain object model. The presence of a competent JDO implementation massively reduces the headache of accommodating the impact of these changes.

    Also, if you write a persistence infrastructure you must support that home-written infrastructure. This can be a resource drain on projects.

    If you use someone else's framework or proprietary tool, then you are reliant upon them for evolving and supporting the product.

    If you use JDO, you can jump ship if your chosen vendor falls out of favour.

    As far as Portability goes, perhaps that website session context database you wrote to cache per-session data would run better as an object database after all.... Now you can trial that without having to rearchitect the application.

    Kind regards, Robin.
    Robin@OgilviePartners.com
  59. For complex/large apps, O/R does not fit.

    >(O/R: OMG, JDO, EJB, Castor, Hibernate, OJB, etc).
    >Start reading here:
    >http://www.agiledata.org/essays/impedanceMismatch.html

    You seem to have misunderstood what the object-relational impedance mismatch is. The whole point of an O/R tool is to overcome this mismatch, and allow your DB design to play to the strengths of the database, and your object model to be maintainable and use standard object semantics. The article is *supporting* the use of O/R mapping tools.

    Raw SQL querying performance is rarely the prime goal of a J2EE app - for anything even remotely transactional, using an O/R tool shaves weeks or months off development time, and ends up with far better performance, stability and maintainability. Complex and large apps are exactly where O/R mapping fits.

    O/R mapping is not about entirely separating DB and OM design (i.e. one team does one, the other team does the other, and everything magically works in the end) - communication when designing the two is vital. But you are not tied to trying to emulate a relational model in an OO language.
  60. SQL is must know[ Go to top ]

    100% agree, couldn't find a better example. SQL is already an abstraction layer on top of the DB library. It is a must-know for any developer. This is the language designed to talk to databases. If this is too difficult – do not use general purpose languages (C, Java …); there are Web site builders where a basic Web site can be built in minutes.
    But we probably need a little more control over the code. The more abstract the tool – the less flexible it is.
  61. SQL is must know[ Go to top ]

    SQL is already an abstract layer on top of DB Library


    I agree, although SQL is not as "portable" as you would want it to be.

    Oracle has supported the standard outer join syntax only since version 9.x (earlier versions don't). That's pretty awful if you ask me. Oracle also has no support for serializable transactions (correct me if I'm wrong - but don't say that it does just because the documents say so... the rollback segment has its own difficulties that no other new db vendors seem to want to cope with). Still it seems to be one of the most highly regarded dbs.
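
    The outer join point can be made concrete - before 9i, Oracle only understood its proprietary `(+)` notation, so portable ANSI syntax had to be rewritten (table and column names invented for illustration):

    ```sql
    -- ANSI SQL-92 syntax, accepted by Oracle only from 9i onwards:
    SELECT c.name, o.total
    FROM customers c
    LEFT OUTER JOIN orders o ON o.customer_id = c.id;

    -- Equivalent in Oracle's proprietary pre-9i notation; the (+) marks
    -- the side of the join that may have no matching row:
    SELECT c.name, o.total
    FROM customers c, orders o
    WHERE o.customer_id (+) = c.id;
    ```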

    I have used entity beans, which I didn't much like.
    I have used Hibernate, which I sort of like.
    I have used SQL, which I like.

    Ok... then you ask me how I map SQL to my Java objects. Value objects, maybe? No, that's not for me. Maps and dynamic structures like DynaBeans seem the way. Maybe I'm wrong, but for me the db is the thing, and it is here to stay.

    Many of you talk about a db-independent approach, but I'm talking about language independence - few dbs are so tied to a programming language that it's hard to make a change to another db (modifications are surely needed). And yes, there are DBs that were designed in the '60s. Those dbs are interfaced with many programs, whether they are written in Cobol, C, VB, Java or whatever. The point is that db-independent programming is not as important as some want to say. With a fair amount of work you can support as many dbs as you like (e.g. using the DAO approach), but still optimize for a specific db (maybe using stored procedures etc.). My advice is to write the program for one db, and when needed you can always port it to another (maybe then you write a layer that abstracts these dbs from one another, or maybe you use some configuration parameter that makes the difference).

    What I think is that Java is going to die someday. When it happens, SQL will still be working and used (it has happened before, it will happen to Java too).
  62. SQL is must know[ Go to top ]

    Aapo,

    Maybe I'm wrong

    No, you are 100% right.

    Regards
    Rolf Tollerud
  63. SQL is must know[ Go to top ]

    Oracle also has no support for serializable transactions (correct me if I'm wrong

    >

    OK, you are wrong.

     set transaction isolation level serializable;

    works for me - have used it in 9i and 8i.

    The default is "read committed". I believe those are the only isolation levels supported.

    --Tom Risberg
  64. Oracle also has no support for serializable

    >> transactions (correct me if I'm wrong
    >
    > OK, you are wrong.
    > set transaction isolation level serializable;

    I knew that someone would come up with this. In fact I should have warned you. Oracle's serializable isolation level IS NOT serializable. Oracle's serializable isolation level is a sort of repeatable read, but not serializable as described in the ANSI (-92) standard:

    "The execution of concurrent SQL-transactions at isolation level
    SERIALIZABLE is guaranteed to be serializable. A serializable
    execution is defined to be an execution of the operations of
    concurrently executing SQL-transactions that produces the same effect
    as some serial execution of those same SQL-transactions. A serial
    execution is one in which each SQL-transaction executes to completion
    before the next SQL-transaction begins."
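
    The gap being pointed at here can be illustrated with a write-skew schedule, which snapshot-style isolation (what Oracle provides under the SERIALIZABLE keyword) permits but a truly serializable execution would forbid - a hypothetical example, assuming a `doctors` table with an `on_call` flag and an application rule that at least one doctor must stay on call:

    ```sql
    -- Both sessions first run:
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    -- Interleaved schedule (both doctors initially on call):
    -- 1. A: SELECT COUNT(*) FROM doctors WHERE on_call = 1;      -- sees 2
    -- 2. B: SELECT COUNT(*) FROM doctors WHERE on_call = 1;      -- sees 2
    -- 3. A: UPDATE doctors SET on_call = 0 WHERE name = 'alice';
    -- 4. B: UPDATE doctors SET on_call = 0 WHERE name = 'bob';
    -- 5. A: COMMIT;
    -- 6. B: COMMIT;  -- both succeed: the updates touch disjoint rows
    ```

    The end state leaves nobody on call, yet no serial order of the two transactions could produce that result - which is exactly what the quoted ANSI definition rules out.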
  65. Aapo,

    You set me up :-)

    > >> Oracle also has no support for serializable
    > >> transactions (correct, if I'm wrong
    > >
    > > OK, you are wrong.

    > > set transaction isolation level serializable;
    >
    > I knew that someone would come up with this. In fact I should have warned you. Oracle's serializable isolation level IS NOT serializable. Oracle's serializable isolation level is a sort of repeatable read, but not serializable as described in the ANSI (-92) standard:
    >
    > "The execution of concurrent SQL-transactions at isolation level
    > SERIALIZABLE is guaranteed to be serializable. A serializable
    > execution is defined to be an execution of the operations of
    > concurrently executing SQL-transactions that produces the same effect
    > as some serial execution of those same SQL-transactions. A serial
    > execution is one in which each SQL-transaction executes to completion
    > before the next SQL-transaction begins."

    The Oracle implementation of serializable seems to prevent the following inconsistencies:
     Dirty read
     Non-repeatable read
     Phantom rows

    Is that not true, or are you looking for something else?

    Tom Risberg