Hard Core Tech Talk with Dennis Leung, VP TopLink Development

  1. In this interview, Dennis discusses some J2EE persistence issues and challenges, such as the object-relational impedance mismatch and the benefits of O/R mapping products such as TopLink. He looks at the pros and cons of entity beans and JDO support in the industry, compares the persistence capabilities of .NET vs. J2EE, and discusses why Oracle acquired TopLink from WebGain.

    Watch Dennis Leung's Interview Here.

    Dennis Leung is an original member of the team that delivered the first Smalltalk version of TopLink in 1994 and the subsequent TopLink for Java product in 1997.

    Threaded Messages (98)

  2. Excellent interview

    I agree with just about everything Dennis says. And it's good to see TSS post the text now, instead of having to play the video!

    I particularly like his points about the importance of DBAs on J2EE projects. I've seen some serious failures where J2EE developers cheerfully define the schema purely from their object model and sideline the people who know most about efficient and manageable use of the database.

    Also, the point that "You just have to live with the database schema that's existing and some of the rules that are associated with it...I don't want my object model to simply mirror what my data model looks like." A 1:1 mapping between "objects" and tables, as we usually see with entity beans, does not really solve the impedance mismatch.

    Personally I would go much farther in criticism of the inadequacies of the entity bean model. It meets so few of the goals for O/R mapping Dennis identifies!

    Regards,
    Rod Johnson, author of Expert One-on-One J2EE Design and Development
  3. text version ?

    erm, where do you "see TSS post the text now"?

    thanks,
    Christian
  4. text version ?

    Erm, the "TEXT" links below the questions in the video TOC? ;-)

    Of course, it would be nice if TSS provided a directly accessible full text version of the interviews instead of that workaround...

    Juergen
  5. JDO and co

    I also share Dennis' basic view, as it reflects actual requirements instead of one-size-fits-all hammering. A welcome change in the marketing-glutted J2EE world!

    Concerning JDO and Dennis' statements on it: I agree that the biggest issue with JDO is that it requires either byte code or source code manipulation, mainly to enable modification detection via the PersistenceCapable interface. Other toolkits like TopLink, CocoBase, Hibernate, and OJB rely on reflection and compare potentially modified objects to initial copies created at load time. I have wondered since its inception why JDO enforces a particular approach instead of leaving the choice to the toolkit vendor.
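    To make the reflection-based approach concrete, here is a minimal sketch of snapshot-style modification detection (illustrative only - not TopLink's, Hibernate's, or OJB's actual code; all class and method names here are invented):

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Illustrative sketch: a reflection-based toolkit keeps a snapshot of
// each object's field values from load time and diffs against it at
// commit time. The persistent class stays a plain, unenhanced POJO.
class DirtyChecker {

    // A sample persistent class: no special interfaces, no enhancement.
    static class Account {
        private String owner;
        private int balance;
        Account(String owner, int balance) {
            this.owner = owner;
            this.balance = balance;
        }
        void setBalance(int balance) { this.balance = balance; }
    }

    // Snapshots keyed by object identity, as a persistence context would.
    private final Map<Object, Map<String, Object>> snapshots = new IdentityHashMap<>();

    // Called when an object is read from the database.
    void register(Object entity) {
        Map<String, Object> copy = new HashMap<>();
        for (Field f : entity.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                copy.put(f.getName(), f.get(entity));
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        snapshots.put(entity, copy);
    }

    // Called at commit time: any field differing from its snapshot
    // marks the object as needing an UPDATE.
    boolean isDirty(Object entity) {
        Map<String, Object> snapshot = snapshots.get(entity);
        for (Field f : entity.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            try {
                Object now = f.get(entity);
                Object then = snapshot.get(f.getName());
                if (now == null ? then != null : !now.equals(then)) {
                    return true;
                }
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return false;
    }
}
```

    A real persistence context would of course also deep-copy mutable values and handle collections and associations; the point is just that no build-time step touches the persistent class.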

    Due to this main flaw and other reasons, there seems to be only half-hearted adoption of JDO. The respective designers of CocoBase and Hibernate deliberately chose to ignore it and continue to go their own reflection-based way. Even TopLink and OJB rely on reflection internally in their persistence kernels and adopt JDO as one of their high-level APIs, trying to bridge the gap but not being able to achieve full JDO 1.0 compatibility with this approach. So there's neither full support from the major commercial players nor from any significant open source projects.

    Of course, JDO could ease the implementation restrictions in version 2.0 of the spec. Unfortunately, numerous classes and interfaces are designed around the PersistenceCapable interface that runtime business objects have to implement, so this may turn out hard to achieve, especially in combination with 1.0 compatibility. In the end, this basic design flaw may be the reason why JDO will never really take off in the O/R world and remain a niche API for object database access.

    Concerning persistence APIs in general: I don't consider absolute code portability the goal, but rather easy migration from one toolkit to another. This particularly means no impact on the source code of business objects, which both reflection-based and JDO-based toolkits enable. That basically leaves the CRUD calls, and the transaction demarcation if not using JTA. And even the CRUD calls can be isolated in DAO classes.
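    The DAO isolation mentioned above can be sketched like this (hypothetical names throughout; the in-memory implementation stands in for one backed by a real toolkit):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the DAO idea: business code depends only on
// the interface, so swapping one persistence toolkit for another means
// rewriting one implementation class, not the business objects.
interface CustomerDao {
    void save(Customer c);
    Customer findById(long id);   // returns null if not found
    void delete(long id);
}

// The persistent class stays a plain object with no toolkit imports.
class Customer {
    final long id;
    String name;
    Customer(long id, String name) { this.id = id; this.name = name; }
}

// An in-memory stand-in; a real implementation would delegate these
// three methods to the chosen toolkit's session or persistence manager.
class InMemoryCustomerDao implements CustomerDao {
    private final Map<Long, Customer> store = new HashMap<>();
    public void save(Customer c) { store.put(c.id, c); }
    public Customer findById(long id) { return store.get(id); }
    public void delete(long id) { store.remove(id); }
}
```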

    So I'm actually not a proponent of including JDO in the J2EE umbrella, as I don't think that there's a definitive need for widespread JDO adoption in the J2EE community. Rather, I'd say that there's the need for widespread adoption of lightweight persistence toolkits instead of Entity Beans. TopLink and CocoBase are the long-established major players; Hibernate, OJB, and Castor the most interesting open source projects; and the currently available commercial JDO-based toolkits just another option, IMHO.

    Juergen
  6. JDO and co

    <I agree that the biggest issue of JDO is that it requires either byte code or source code manipulation>

    Why do people have such an issue with source code manipulation (aka code generation)?

    They happily use tools such as XDoclet, and Weblogic's ejbc which use exactly these techniques. If it's transparent why is it a problem?
  7. JDO and co

    <quote>
    Why do people have such an issue with source code manipulation (aka code generation)?
    </quote>

    Just to clarify: For me, it's not code manipulation per se but rather unnecessarily making it a requirement in a spec.

    In the case of JDO, this effectively prevents reflection-based toolkits from achieving full JDO compatibility. JDO requires binary compatibility such that persistent classes implement the PersistenceCapable interface at least in byte code, i.e. at runtime, to be able to switch the JDO implementation without needing to recompile the source code of the application. Reflection-based toolkits use unprocessed POJOs, so you cannot use their compiled persistent classes with JDO-based toolkits that expect the classes to implement PersistenceCapable.
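    To illustrate why the compiled classes aren't interchangeable, here is a hand-written caricature of what an enhancer effectively turns a POJO into (the real javax.jdo.spi.PersistenceCapable contract has many more methods; the interface names below are invented stand-ins):

```java
// Invented stand-ins for the JDO SPI types -- a caricature of the idea,
// not the actual generated code.
interface StateManagerLike {
    void fieldChanged(Object pc, String fieldName);
}

interface PersistenceCapableLike {
    void replaceStateManager(StateManagerLike sm);
}

class Employee implements PersistenceCapableLike {
    private String name;
    private transient StateManagerLike sm;   // injected by the runtime

    public void replaceStateManager(StateManagerLike sm) { this.sm = sm; }

    // The enhancer rewrites every field write to notify the state
    // manager -- this is how modifications are detected without snapshots.
    public void setName(String name) {
        this.name = name;
        if (sm != null) sm.fieldChanged(this, "name");
    }
    public String getName() { return name; }
}
```

    A reflection-based toolkit's compiled classes contain none of this plumbing, which is exactly why they cannot satisfy the binary compatibility requirement.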

    IMHO this requirement of binary compatibility went over the top. I wouldn't mind needing to throw a new postprocessor at my application classes when deploying for a different JDO implementation, if needed at all. Of course, this would not allow for choosing any JDO implementation for a prepackaged application but would rather require a choice at packaging time. This is less flexible in terms of deployment, but it would not lock out toolkits with other persistence strategies.

    And honestly, who really needs that deployment flexibility? If you need to deploy for 2 different JDO implementations, simply adapt your Ant scripts to support both. You will need to rewrite your O/R mapping files anyway, as JDO doesn't specify their format. So why not run a certain postprocessor too, if and only if the persistence toolkit of your choice requires it?

    Juergen
  8. Reading all these pros and cons about byte code compatibility as a required feature of JDO, I am missing an important issue: the independence of the object model.

    If you are using just-in-time reading of referenced objects (called indirection in TopLink, lazy loading in OJB) - and you certainly will in most cases, in order to avoid instantiating all associated objects - there should be no impact on your object model.

    The JDO spec and the enhancement process take care of this issue and make it possible to design your objects based only upon the business requirements, not upon technical reasons (O/R stuff).

    TopLink and OJB have several mechanisms to support just-in-time reading, but all of them require some modifications to the object model. For example, TopLink has ValueHolderInterfaces which have to be used as reference types instead of the real type. That is the worst thing that can be done to your object model that I can imagine. Another way is the use of dynamic proxies (a JDK feature), but here the drawback is that each class to be proxied has to implement an interface. So you have to add additional interfaces to your object model.
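    For readers unfamiliar with the pattern being criticized: value-holder indirection replaces a direct reference with a wrapper that fetches the target on first access, roughly like this (a generic sketch, not TopLink's actual API):

```java
import java.util.function.Supplier;

// Generic sketch of value-holder indirection (not TopLink's actual
// ValueHolderInterface): the field's declared type becomes the holder
// instead of the real target type -- exactly the model intrusion the
// post objects to.
interface ValueHolder<T> {
    T getValue();
}

class LazyValueHolder<T> implements ValueHolder<T> {
    private final Supplier<T> loader;  // e.g. a database read
    private T value;
    private boolean loaded;

    LazyValueHolder(Supplier<T> loader) { this.loader = loader; }

    // The target is fetched only on first access, then cached.
    public synchronized T getValue() {
        if (!loaded) {
            value = loader.get();
            loaded = true;
        }
        return value;
    }
}

class Order {
    // Instead of `private Customer customer;` the model must declare:
    final ValueHolder<String> customer;  // String stands in for Customer
    Order(ValueHolder<String> customer) { this.customer = customer; }
}
```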

    As long as non-JDO-O/R-mapping-tools rely on tampering with the object model in order to enable indispensable features like just-in-time-reading, I won't even think about using them. This is a striking advantage of JDO-compliant O/R-mapping-tools.
  9. noticed the last para:

    <quote>
    As long as non-JDO-O/R-mapping-tools rely on tampering with the object model in order to enable indispensable features like just-in-time-reading, I won't even think about using them. This is a striking advantage of JDO-compliant O/R-mapping-tools.
    </quote>

    I *really* encourage you to check Hibernate. You will discover that it is actually less model-intrusive than most JDO implementations. And we certainly *do* have just-in-time fetching (or eager fetching with outerjoins, where _that_ is appropriate).

    That should be proof enough that the JDO spec doesn't need to go into these implementation details!
  10. quick question

    Why can't Hibernate support JDO?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  11. quick question

    <quote>
    Why can't Hibernate support JDO?
    </quote>

    Because I, personally, along with many other people including the other Hibernate developers, think the implementation details mandated by the specification are not the best of the approaches available. You are free to disagree. But I'm not going to expend all this effort building something (for free) that I don't believe is the right approach.

    If only the JDO spec would not contain implementation details :(

    Anyway, it is not up to Hibernate, or TopLink, or Cocobase to prove that JDO is a better O/R technology than these existing, mature, popular solutions. It is up to the JDO boosters to convince *us*.

    Until then, the Hibernate project will continue to innovate.... (judging by forum traffic, we have more users than the JDO vendors)

    =====================================================
    As an aside:

    My reading of the spec is that it doesn't provide for certain features that are essential for performance of an O/R mapper (due to JDO's OODB roots). In particular, from what I can tell, the query "language" won't let me retrieve multiple columns of objects and simple object properties in a single result set. Now, the section of the JDO spec that describes the query capabilities is extremely obscure and underspecified, so I may possibly be wrong here. (There are enough other things wrong with the query language that I don't really care to dig too deep.) I'm interested to know if someone here can confirm my interpretation....
  12. RE: quick question

    JDO absolutely lets you retrieve multiple columns and properties in a single load. That's the point of load groups.
  13. quick question

    <quote>
    Why can't Hibernate support JDO?
    </quote>

    I don't think that Hibernate should change the way it's been going so far by adding any kind of JDO support unless and until the JDO spec reinvents itself by shifting to a more API-compatibility-focused approach. The current JDO spec is rather restrictive, and IMHO, Hibernate's doing well enough on its own.

    I've had some very good experiences with Hibernate. Getting a mapping file in place for a complex schema is time-consuming, but once you have that in place, it works extremely well. The only major problem I ever really ran into was trying to use the Microsoft JDBC driver to connect to SQL Server. Not a good idea. Switching drivers and eventually databases (a project decision not related to Hibernate) helped. And since then it's been a pretty good experience. The three best things I like about Hibernate:

    1) It's non-intrusive.
    2) It performs well. (Note: a good understanding of all the mapping possibilities is very important if you are working with large schemas. A bit of experimentation can be key too.)
    3) It has excellent querying capabilities - the query language might be slightly verbose (IMHO), but it works very well.

    And it's a fast-moving project too - Hibernate 2.0 is in beta right now (when I last checked).

    Sandeep.
  14. Oops, another important point.

    <snippet>
    The three best things I like about Hibernate..
    </snippet>

    Um, slight correction. If you count the fact that it's absolutely free (no developer licenses and no runtime licenses), that's, well, four things. ;-)

    Sandeep.
  15. I just wanted to clarify some points about TopLink's just-in-time reading (indirection) support. TopLink has always striven to be non-intrusive to the persistent object model, and as of JDK 1.3 no TopLink code is required for just-in-time reading.

    Collection references can be implemented using the standard Collection (java.util.*) types. These relationships can be configured with or without indirection with NO changes to the object model.

    1:1 references can be implemented with proxy indirection. In this case the additional interface is required when indirection is needed to keep the persistence logic out of the model.
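    The JDK dynamic proxy technique referred to here works roughly as follows (a generic sketch using java.lang.reflect.Proxy, not TopLink's implementation); note the constraint being described - the 1:1 target must be typed by an interface:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.function.Supplier;

// Generic sketch of 1:1 proxy indirection. The interface requirement is
// inherent to java.lang.reflect.Proxy, which can only proxy interfaces.
interface Address {
    String getCity();
}

class AddressImpl implements Address {
    private final String city;
    AddressImpl(String city) { this.city = city; }
    public String getCity() { return city; }
}

class LazyProxy {
    // Returns an Address that loads the real object on first method call.
    static Address lazyAddress(Supplier<Address> loader) {
        return (Address) Proxy.newProxyInstance(
                Address.class.getClassLoader(),
                new Class<?>[] { Address.class },
                new InvocationHandler() {
                    private Address target;  // fetched lazily
                    public Object invoke(Object proxy, Method m, Object[] args)
                            throws Throwable {
                        if (target == null) {
                            target = loader.get();  // the deferred "read"
                        }
                        return m.invoke(target, args);
                    }
                });
    }
}
```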

    Doug Clarke
    Oracle9iAS TopLink
    Product Manager
  16. Heiko: "As long as non-JDO-O/R-mapping-tools rely on tampering with the object model in order to enable indispensable features like just-in-time-reading, I won't even think about using them. This is a striking advantage of JDO-compliant O/R-mapping-tools."

    Exactly! The byte code interception way of detecting field changes is far more elegant (and faster) than model-intrusive means. Byte code modification is going to become more and more commonplace, and eventually all the fuss over it will go away. I have used BCEL very effectively, and AspectJ is based completely on compile-time byte code manipulation.

    As a user, not a vendor, I really don't care if it takes the entrenched O/R mapping vendors some retooling to come into compliance with the spec. Anyway, I can choose from numerous off-the-shelf JDO implementations that are already completely compliant. As a *user* I really like the way the JDO spec is designed.
  17. JDO and co

    Juergen,

    Bear in mind that there is always an enhancement step, for all persistence frameworks. JDO is unique in that it automates this step. Reflection-based frameworks leave this step up to the developer, who must expose persistent fields either directly as public fields or via public setters and getters.

    So, one of the major (and least-discussed) advantages of JDO's enhancement requirements is that encapsulation of your data model is possible in JDO, but typically impossible without some serious custom coding in reflection-based frameworks.

    -Patrick

    --
    Patrick Linskey
    SolarMetric Inc.
  18. Excellent interview

    This relates to the comparison between .NET and J2EE. I think it is obvious that Dennis has a complete lack of knowledge of .NET and its amazing architecture. I think this is typical of what is happening in the J2EE camp: making grossly inaccurate statements about other technologies they have absolutely no knowledge about. It is quite embarrassing!
  19. Excellent interview

    Hi,

    <quote>
    This relates to the comparison between .NET and J2EE. I think it is obvious that Dennis has a complete lack of knowledge of .NET and its amazing architecture. I think this is typical of what is happening in the J2EE camp: making grossly inaccurate statements about other technologies they have absolutely no knowledge about. It is quite embarrassing!
    </quote>

    I am a consultant who works with both .Net and J2EE technologies, and let me tell you that Dennis is pretty accurate about his statements.

    (FACT) .NET's native support for persistence primarily revolves around ADO.NET, which has some interesting features like typed DataSets and excellent XML integration.

    I also like the fact that the basic data access framework (ADO .Net is pretty much the equivalent of JDBC) supports in-memory, programmatic, relationship modeling. Perhaps JDBC can learn from .Net in this regard.

    (FACT) ObjectSpaces, Microsoft's attempt at an O/R framework, is a pretty immature offering in its current state - and it has a long way to go before it will begin to appeal to enterprise-level developers. It has very basic support for one-to-one and one-to-many mappings, and lacks optimization in the plumbing (I've taken a look at the plumbing, so I know). It fits in the same space as JDO and frameworks like Hibernate, but is still lacking features that would make it viable for real-world use.

    So, Dennis Leung is pretty accurate with respect to the few details that he spoke of with respect to the Microsoft and .Net world. If you want good O/R related capabilities in the .Net world, you're pretty much out of luck.

    There are some O/R frameworks (open source and otherwise) emerging today in the .NET space. However, none of them are quite "there" yet.

    As someone who has written a .NET-based (100% C#), aspect-oriented persistence framework using interception, dynamic proxies, and runtime interface implementation, let me tell you that there is nothing in the .NET space today that can match competing persistence technologies on the J2EE side.

    (FACT) Speaking of the J2EE side, take a look at some of the mature commercially available persistence frameworks out there like Cocobase or Toplink, or even the open source ones like Hibernate, XORM, OJB, Castor etc (you lose count very fast).

    And it should soon become obvious that the J2EE development community, far from talking through its hat, has expectations that (FACT) .Net simply CANNOT fulfil in its current state of maturity today.

    A year, or two from now? Who knows? Anything could happen. I agree that .Net has some interesting functionality in it not related to this specific topic. For instance, I think that ASP.Net is a well designed web application programming model.

    But in the persistence world, the Java world rules.

    Sandeep
  20. amazing technology?

    Mackie: "I think it is obvious that Dennis has a complete lack of knowledge .Net and its amazing architecture. ... Making grossly inaccurate statements about other technologies"

    .NET is not a technology, any more than Excel is a technology or Word is a technology or Windows is a technology. .NET is a product. It is the property of Microsoft, just like Windows and Word and Excel and Microsoft's other proprietary products.

    Java and J2EE are not technologies either, lest you feel persecuted. However, to continuously compare Java and .NET is really annoying to those of us who left the single-vendor proprietary mindset years ago.

    As for .NET's amazing architecture, it might be amazing to those developers that have been stuck in that single-vendor proprietary market, since it has generally lagged 5-6 years behind the state of the art, but if it makes you feel better, then yes, by all means, a garbage-collected safe execution environment with a friendly "C++-like" language is certainly a good step forward architecturally from vbrun.dll.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  21. I am always skeptical of a large company with a proprietary product claiming that they will basically commoditize their products, reduce their revenue stream, and support a spec if the spec only did X, Y, and Z, especially when that vendor has voting rights on deciding if the spec is passed or not.

    I have a number of questions that I am hopeful that the appropriate people can answer:

    1) Wasn't Oracle part of the JDO spec team at some point in its existence? If so, why weren't these issues brought up sooner? Why were they raised only once the TopLink purchase was made?

    2) Weren't TopLink (WebGain at the time) and CocoBase invited to join the JDO spec team? Even if they weren't invited, why didn't they get involved, join the team, and influence the direction of the specification instead of bashing it after the fact?

    3) Finally, why haven't IBM and BEA jumped into the fray with their own persistence framework - something usable, not entity beans? Oracle's purchase of TopLink (wasn't BEA an investor in TopLink and recommending it just a year ago to their app server users?) has got to have somebody in the Web Logic/Sphere camps nervous. Isn't Oracle, with their pricing of TopLink (~10k per CPU, or free if you own Oracle A/S), using persistence as a differentiator in the app server market? My belief is that it is only a matter of time before somebody else in the data access space gets bought - maybe CocoBase, but maybe it'll be a JDO vendor.

    4) I may be 100% jaded with big companies but I see most of this discussion as an attempt to maintain vendor lock in by spreading FUD. The only comments I have taken seriously are the ones from the Hibernate folks because they have less to lose than a commercial proprietary vendor. Wouldn't you do everything in your power to maintain vendor lock-in and avoid supporting the spec if you could charge $5-10K per CPU when the most expensive JDO vendors are charging $3-5K per developer license?

    5) To the Server Side folks, you guys have always had gumption. When are we going to see an "Independent Evaluation of Persistence Frameworks" covering all of the persistence solutions? This Rice benchmark thing sounds interesting but seems to still be in the early planning phases at best, and somehow seems like it will be tilted in favor of the JDO vendors since the JDO vendors are sponsoring it (can someone confirm that?). This seems like an ideal opportunity for the Server Side to raise some dander... maybe with the JDO folks, maybe with Oracle, maybe with Hibernate, or maybe with one of the proprietary vendors. There are enough colorful players out there that this seems like a battleground that the Server Side is famous for being in the middle of (and of course claiming Switzerland all along).
  22. One last question

    6) I'm not sure I am 100% comfortable with all of this byte code manipulation stuff (it seems useful and is hard to argue with "that's what javac does anyways" but it still makes me nervous) but isn't JBoss moving to doing a bunch of bytecode stuff? I saw a recent summary of Fleury's presentations he's been doing all over the place and that is what it sounded like to me.
  23. byte code stuff

    <quote>
    6) I'm not sure I am 100% comfortable with all of this byte code manipulation stuff (it seems useful and is hard to argue with "that's what javac does anyways" but it still makes me nervous) ...
    </quote>

    I've worked on two different projects:

    1. XML parser and modifier and builder
    2. ClassFile (including byte code) parser and modifier and builder

    The ClassFile project was much easier to write, more easily verifiable (testable from a provability standpoint), and the specifications from which to work were much easier. That makes byte code assembly, disassembly, modification, mangling, rummaging, humping, whatever you want to call it, a pretty well-known science well within the capability of any product, particularly using a library (e.g. BCEL) that does all the hard work for you.

    The frameworks that we've traditionally used have all been reflection-based, but byte code mugging or munging or whatever is hardly scary (IMHO).

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  24. Rice benchmark

    Hi Chester

    "This Rice benchmark thing sounds interesting but seems like it is still in the early planning phases at best and somehow seems like it will be tilted in favor of the JDO vendors since the JDO vendors are sponsoring it (can someone confirm that?)"

    The proposal being discussed will be sponsored by JDO vendors if it goes ahead. The actual benchmarking will be done independently after all participating vendors have had a chance to optimize for it. Someone has to pay for the work :)

    You are spot on with the FUD comments!

    Cheers
    David

    PS: I work for Hemisphere Technologies. We make JDO Genie, a high performance JDO implementation for relational databases.
  25. loaded questions

    What was up with the thinly veiled JDO bashing? Talk about loaded questions! Asking the vendor of a proprietary OR-mapping framework what the problems with JDO are ...come on. Anyone who saw the rants against JDO from the entrenched OR-mapping vendors knows that a unified API for object persistence scares the pants off these vendors. If they implemented the JDO spec their customers could switch vendors or, God forbid, choose an open source implementation.

    If I were a JDO vendor I would do everything I could to make bytecode enhancement sound like the devil itself. The bytecode enhancement that JDO performs is absolutely trivial, but certainly a much smarter and more transparent method than reflection-based schemes. Byte code enhancement makes lots of sense, and it is exactly the binary compatibility requirement of the JDO spec that scares the O/R mapping vendors. I'll bet that if the JDO spec had not specified binary compatibility, all the O/R mapping vendors would be climbing over each other to implement JDO, because it would not threaten the vendor lock-in that they achieve.

    Kudos to Craig Russell for having the cojones to put together such a kick-ass spec as JDO. And kudos to Craig for creating a spec that caters to the needs of the developers, not the entrenched vendors.

    As for the loaded question of why hasn't JDO been widely adopted ... It's been what, a whole 3 months now since the spec got out of the draft stage.
  26. loaded questions

    <quote>
    As for the loaded question of why hasn't JDO been widely adopted ... It's been what, a whole 3 months now since the spec got out of the draft stage.
    </quote>

    And there's the fact that Sun seems not to want to support it. Hopefully they do not try to stomp it out as a technology.

    That would be bad.

    Sandeep.
  27. TopLink

    This was not .NET bashing; he is telling it like it is.
    Don't mistake discussing a .NET weakness for bashing. That's the problem: .NET isn't a platform, it's a product, and you can't have a platform vs. product discussion without accusations of bashing.

    As for JDO bashing, I don't see that either... If JDO is such a great technology, don't you think the JDO central site should maybe use it (instead of a PHP-based site)?
    I don't want to dismiss JDO, I just want to see (and hear about) some real-world success stories. If the vendors don't get behind it, then it needs really strong open source solutions. (Or make it mandatory in a future J2EE spec.)
    As for open source JDO, I hear bad things about Castor (and that's not even real JDO), so that just leaves OJB? Anyone have good/bad stories about that?

    -jp
  28. JDO success stories

    A few months back I was evaluating a JDO implementation for an upcoming project at work. As part of that evaluation I asked the JDO vendor for customer references. I exchanged emails with one of the vendor's customers, who had been using the product in its pre-JDO-compliant version (since the spec had not yet been finalized). Naturally (as you might expect from a customer reference), the client was happy with the product. What I did find interesting from the exchange was that the customer had never even bothered with entity beans and had gone with JDO from the start. So one of the things that I hope will come out of these postings is that there are in fact numerous mature JDO implementations, and at least a few real deployments. I don't think a whole lot more can be expected from a 3-month-old spec. If any of the JDO vendors are reading this (and I'm sure you are!), this would seem to be your chance to speak up.
  29. JDO success stories

    JDO isn't 3 months old; look at the JCP page
    (http://www.jcp.org/en/jsr/detail?id=12):

      Maintenance Draft Review - 03 Mar, 2003
      Final Release - 30 Apr, 2002
      Final Approval Ballot - 25 Mar, 2002
      Proposed Final Draft - 10 May, 2001
      Public Review - 06 Jul, 2000

    First, look at how long it was in Public Review state. And the official Final Release was 9 months ago.

    IMHO, we don't need JDO 1.0 success stories, just as we don't need EJB CMP ones. A flawed spec doesn't get better with "success stories". Anyway, I expect more Hibernate real world usage than JDO...

    What we need is a JDO 2.0 spec that is reworked from the ground up. I think Gavin has put forth the definitive arguments. If Craig Russell and the JDO spec group choose to ignore them, then I expect JDO to end up on the garbage dump of flawed specs. And we really don't need any more of those!

    Juergen
  30. JDO success

    We are currently using JDO (SolarMetric's Kodo, to be precise) for a significant J2EE development, and find it transparent, productive, performant, and well supported.

    Tom
  31. "What was up with the thinly veiled JDO bashing?"

    Geoff, I asked Dennis that question because it is important to hear the perspective of an O/R vendor on the spec. JDO *needs* the support of the O/R mapper space or it will be doomed to ODBMS obscurity. If the O/R guys have a beef with it, we should hear what that beef is and try to accommodate the spec so that the spec itself will succeed.

      Censorship serves no one.

    Dennis' response definitely got me thinking about byte code manipulation. I now question the value of being able to take a JDO-enabled app class binary and just 'run it' in a different vendor's product. That seems to be the reason they required the bytecode stuff: so that all the different JDO vendors could read the same classes. Is this some remnant dream from the early days of EJB, when Sun cared about creating reusable business components?

    To require byte code compilation just to enable this class reuse sounds impractical. I mean, no matter what, if you take some code and want to run it in a different vendor's product, you will HAVE to go through a compile/config cycle anyway, just like with J2EE apps. So the idea of having reusable binaries is not useful in the real world. What is important is reusable/compatible source code.

    So why doesn't JDO just standardize what's really important, which is source code compatibility and specifying all the runtime semantics (like lifecycle, caching, etc.)? Let the vendors worry about how they actually make the JDO source code persist.

      I think such an approach would have helped JDO get a lot farther than it has.

    Floyd
  32. Floyd,

    It's nice to see that we share the same view of bytecode manipulation! As I already elaborated above, I can't see the value of bytecode compatibility if you need to rewrite all the O/R mapping config files anyway... There's simply no real-world value in that. Recompiling is a breeze; converting O/R mappings is a hassle.

    And Craig Russell, well, I think he prioritized inappropriately. Focusing on the bytecode stuff instead of addressing O/R mapping specifics missed the actual requirements of application developers. CocoBase's Ward Mullins may rant a lot, but he has valid arguments backing his views. Why should toolkit vendors be forced into a certain persistence mechanism, especially for existing products? There isn't any technical need for it...

    Juergen
  33. JDO 2.0, and politics

    I think JDO 2.0 will be make or break. They need to listen to everyone's concerns, shape up, and fix it. The spec is still young, and if they can put politics behind them, JDO could be a great thing for us all.

    Hopeful, but wary,

    Dion
  34. Dirty-checking vs Enhancement[ Go to top ]

    <quote>
    Why do people have such an issue with source code manipulation (aka code generation)?
    </quote>


    There are two basic problems with this approach
    compared to reflection based dirty-checking.

    (1) It *forces* an extra build step that you *may* not want

    (2) It makes it very difficult to support the kind of fine-grained persistence that Hibernate implements (extensible types)

    I'm only going to discuss (2), because it's less well understood. With dirty-checking, I or an application developer can add a new Type class that maps *any* mutable or immutable class or primitive to any database column(s). With bytecode enhancement, I have to actually enhance _every mutable type_: not just first-class objects, but also mutable second-class objects!

    Example: JDO implementations need to enhance java.util.Date if they want to support use of Date.setTime(). (Of course, Date should be an immutable class in the JDK, but that's another story....)

    An even worse problem is the case of arrays. The JDO spec states quite explicitly that changes made to an array might not get persisted. My assumption is that that's because they realise you can't "enhance" an array class without an actual JVM change. Dirty-checking persistence layers have no problem handling arrays.

    So *really* transparent persistence turns out to be a bit of a pipe dream with these kinds of approaches, since enhancement doesn't scale very well to a large number of types and is actually impossible for some types.
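    A minimal sketch of the snapshot-style dirty checking described above (hypothetical code, not Hibernate's actual implementation): field values are copied via reflection when an object is loaded and compared at commit time, which handles in-place array mutation without enhancing anything.

    ```java
    import java.lang.reflect.Field;
    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;

    // Reflection-based dirty checking: snapshot field values at load
    // time, compare at commit time. Arrays are cloned and compared
    // element-wise, so no enhancement of the array class (or of any
    // mutable field type) is needed.
    class DirtyChecker {
        private final Map<String, Object> snapshot = new HashMap<String, Object>();

        void takeSnapshot(Object entity) {
            try {
                for (Field f : entity.getClass().getDeclaredFields()) {
                    f.setAccessible(true); // private fields work too
                    Object value = f.get(entity);
                    if (value instanceof int[]) {
                        value = ((int[]) value).clone(); // copy so in-place edits show up
                    }
                    snapshot.put(f.getName(), value);
                }
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }

        boolean isDirty(Object entity) {
            try {
                for (Field f : entity.getClass().getDeclaredFields()) {
                    f.setAccessible(true);
                    Object old = snapshot.get(f.getName());
                    Object now = f.get(entity);
                    if (old == now) continue; // identity short-circuit
                    if (old instanceof int[] && now instanceof int[]) {
                        if (!Arrays.equals((int[]) old, (int[]) now)) return true;
                    } else if (old == null || !old.equals(now)) {
                        return true;
                    }
                }
                return false;
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
    }

    // Hypothetical entity used only for illustration.
    class Account {
        private String owner = "alice";
        private int[] balances = { 10, 20 };
        int[] balances() { return balances; }
        void rename(String o) { owner = o; }
    }
    ```

    A real implementation would of course handle all array component types and nested objects, but the principle is the same: the persistent class itself is untouched.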

    By contrast, Hibernate gives you at least two different options for persisting MonetaryAmount (a mutable user-written class consisting of a BigDecimal and a Currency). In Hibernate, MonetaryAmount could be modelled as a component or as a UserType. Neither requires any infringement upon the MonetaryAmount class itself.

    So what could we do to fix the JDO spec? Simple: extract
    all the APIs and user-visible stuff into a new
    specification and leave implementation details like
    bytecode/reflection/code generation/whatever to the
    implementation. This is exactly how the EJB and Servlet
    specs were designed and is a good part of the reason why
    you see many different mature implementations of those
    specs.

    That way, Hibernate and all the other excellent Java O/R tools could all be JDO implementations without compromising their underlying designs.

    peace...

    Gavin
  35. Dirty-checking vs Enhancement[ Go to top ]

    <quote>
    Extract all the APIs and user-visible stuff into a new
    specification and leave implementation details like
    bytecode/reflection/code generation/whatever to the
    implementation.
    </quote>

    That is exactly what the 2.0 spec ought to do. Recommendations for specific implementation approaches are OK in a spec. Prescriptions are not.

    Sandeep.
  36. Dirty-checking vs Enhancement[ Go to top ]

    Hi All

    "(1) It *forces* an extra build step that you *may*
        not want"

    I cannot see that this is an issue. Any non-trivial project is going to have lots of steps in the build process (XDoclet, etc.). Adding a line to your compile target to enhance after compilation is not a big deal; enhancement only takes a few seconds, even for lots of classes.

    Would people feel the same way about it if JDO enhancement were supported as an option to javac? It is just an extension of the compile process: instead of a javac option, you have an extra line in the build file.

    "(2) It makes it very difficult to support the kind
        of fine-grained persistence that Hibernate
        implements (extensible types)"

    JDO vendors are free to extend the enhancement process to add support for new types. The spec requirement is that the implementation must work with vanilla enhanced classes. If a vendor provides some "outside the spec" feature like new types, then they just require use of their enhancer when you use that feature.

    "Dirty checking persistence layers have no problems handling
    arrays."

    This is true. However, you can use JDOHelper.makeDirty to indicate that the field is dirty. Most people will use Collections anyway.

    Enhancement has important advantages over reflection-based graph diffing:

    1) Any field can be "lazy loaded".

    With reflection based approaches the whole graph has to be loaded at once or you have to get into really messy stuff with "lazy loading proxies". JDO implementations can load the graph bit by bit as you navigate it.

    Consider an example of a class with several collection fields that are only used to implement a few use-cases. They are expensive to fetch every time an instance is retrieved. With JDO they are transparently fetched only when touched in code.

    Consider the case of a large String field mapped to a CLOB or TEXT column. You may only need this field in a few places in your app. You can set default-fetch-group=false and it will only be loaded when touched. The only other way to achieve this is to use "lazy loading proxies" everywhere. Yuk!
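    In JDO 1.0 metadata this setting looks roughly like the following sketch (package, class, and field names are hypothetical):

    ```xml
    <?xml version="1.0"?>
    <jdo>
      <package name="com.example.model">
        <class name="Author">
          <!-- large CLOB-backed field: loaded only when first touched -->
          <field name="biography" default-fetch-group="false"/>
        </class>
      </package>
    </jdo>
    ```

    Fields left out of the default fetch group are skipped on the initial retrieval and fetched lazily on first access.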

    2) Instances survive transactions.

    With JDO you can query an instance in a transaction and reference it even after commit or rollback. You can continue using the same instance in future transactions.

    Reflection based approaches typically require you to discard any instances after commit or rollback and requery them.

    ---

    These two things make JDO much more transparent even if it is harder to support new types!

    Cheers
    David
  37. Lazy Loading[ Go to top ]

    >> TopLink and OJB have several mechanisms to support just-in-time reading, but all of them require some modifications to the object model. For example, there are ValueHolderInterfaces in TopLink which have to be used as reference types instead of the real type. That is the worst thing you can do to your object model. Another way is the usage of dynamic proxies (a JDK feature), but the drawback here is that each class to be proxied has to implement an interface. So you have to add additional interfaces to your object model. <<

    There is a third option that is almost *completely* transparent: runtime bytecode generation via CGLIB (or even BCEL).

    Hibernate uses this to implement proxies w/o interfaces (and certainly no ValueHolders). Check it out!
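    The interface-based JDK dynamic proxy approach mentioned above can be sketched as follows (hypothetical names, not any framework's actual API); the drawback discussed in the post is visible: the persistent type must be an interface.

    ```java
    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.function.Supplier;

    // The persistent class must implement an interface for JDK proxies.
    interface Customer {
        String getName();
    }

    class LazyLoader {
        // Returns a proxy that defers loading the real object until the
        // first method call -- e.g. the loader would hit the database.
        @SuppressWarnings("unchecked")
        static <T> T lazy(Class<T> iface, Supplier<T> loader) {
            return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[] { iface },
                new InvocationHandler() {
                    private T target; // loaded on demand
                    public Object invoke(Object proxy, Method m, Object[] args)
                            throws Exception {
                        if (target == null) {
                            target = loader.get(); // first touch triggers the load
                        }
                        return m.invoke(target, args);
                    }
                });
        }
    }
    ```

    CGLIB-style runtime subclassing achieves the same deferral without requiring the interface, which is why the post calls it almost completely transparent.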
  38. fetch groups[ Go to top ]

    >> 1) Any field can be "lazy loaded".

    With reflection based approaches the whole graph has to be loaded at once or you have to get into really messy stuff with "lazy loading proxies". JDO implementations can load the graph bit by bit as you navigate it. <<

    No, reflection-based approaches can have proxies too (see my comment above). As for fetching a column at a time, that is _usually_ a very *bad* thing, at least for an O/R mapper, since it requires multiple round trips to the same database row.

    There is a very standard pattern for cases where you usually only want some columns but sometimes need more. Check out the Hibernate Wiki, which discusses this pattern (Lightweight Class) in depth.

    http://hibernate.bluemars.net/41.html

    >> Consider an example of a class with several collection fields that are only used to implement a few use-cases. They are expensive to fetch every time an instance is retrieved. With JDO they are transparently fetched only when touched in code. <<

    No decent O/R mapper should work this way. (And none of the ones mentioned above do.)

    >> Consider the case of a large String field mapped to a CLOB or TEXT column. You may only need this field in a few places in your app. You can set default-fetch-group=false and it will only be loaded when touched. The only other way to achieve this is to use "lazy loading proxies" everywhere. Yuk! <<

    Yes, fetch groups are nice sugar, but the pattern I mentioned above allows even better performance.

    However, I agree that fetch groups are harder to do with reflective approaches. I do not agree that they are necessary to achieve acceptable performance.

    >> 2) Instances survive transactions.

    With JDO you can query an instance in a transaction and reference it even after commit or rollback. You can continue using the same instance in future transactions. <<

    You can do that in Hibernate too (and OJB as far as I know). The current implementation in Hibernate lacks some of the sugar in the JDO spec, mainly because no-one has ever needed or asked for it.


    Anyway, I'm not trying to argue that one approach is better for all cases (they both have pros and cons). All I would like to see is the flexibility to innovate in the JDO spec....
  39. Various points of different criticisms[ Go to top ]

    <quote>But the problem is, 90% of the time, you are working with an existing database schema,</quote>

    Most RDBMS JDO implementations do provide a simple way to map to existing schemas (and at least two do so automatically).

    I think it's funny that these EJB and TopLink people spend so much bandwidth a) teaching people how to do anything at all, b) teaching people how to avoid serious performance degradation (how many patterns have you learned to go from painfully slow to just tolerably slow?), and c) denouncing a spec that should fit seamlessly into their products (especially when TopLink had a failed attempt to port their application to JDO).

    <quote>JDO implementations need to enhance
    java.util.Date if they want to support use of
    Date.setTime() ... leave implementation details like
    bytecode/reflection/code generation/whatever to the
    implementation. This is exactly how the EJB and Servlet
    specs were designed</quote>

    JDO is simple, fast, powerful and scalable. And instead of bashing specs incorrectly (e.g. you do not need to enhance java.util.Date), perhaps people should try it and find out how much better a spec designed from the ground up can be than one hacked multiple times like CMP.

    The devil's in the details... Most of us can attest to the time spent porting from WebLogic CMP to Sun CMP to TopLink to [insert proprietary technology here] to handle "implementation details."
  40. Incorrect?[ Go to top ]

    >> e.g. you do not need to enhance java.util.Date <<

    Ummm either

    (1) you need to enhance it
    (2) you need to subclass it

    if you want to detect a call to setTime()

    Or do you have some other suggestion?
  41. plus...[ Go to top ]

    >> And instead of bashing specs incorrectly <<

    P.S. You used a typical fallacious argument technique here. The thrust of my argument was directed toward user-written classes (which may not be fully known at compile time if they are an interface).

    Instead you picked on a single aside in my argument without attempting to refute the main thrust.

    Suppose I have a property of type Serializable, stored as a blob in the database.

    How does the JDO approach propose to detect a change to that property? Ditto for an array.

    If you wish to address the correctness of my arguments, please address their main thrust.
  42. plus...[ Go to top ]

    """Suppose I have a property of type Serializable, stored as a blob in the database.

    How does JDO approach propose to detect a change to that property. Ditty an array."""

    It doesn't. In the case of non-persistence-capable classes (like a generic unenhanced Serializable or an array), you have to explicitly tell the JDO implementation that the field holding the object has changed if you mutate the Serializable/array directly (as opposed to assigning the field a new Serializable/array value, which the implementation would in fact detect).

    JDO's transparency isn't perfect. But neither is anything else without JVM hooks. In real-world usage, JDO covers 99% (literally) of what you need to do completely transparently. The array issue is its biggest weakness, IMO. But you can't subclass arrays to detect changes, and performance would be a major concern if you tried to detect changes by doing equality comparisons on array elements at commit time.

    BTW, on the Date issue you raised: JDO implementations will transparently use a Date subclass that detects calls to setTime. Note that I said transparently: your code still uses java.util.Date. It does the same thing with collection and map types: you still use java.util.*. This approach allows transparency plus instantaneous change tracking without having to do equality comparisons on commit.
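    The tracking-subclass idea can be sketched in plain Java (hypothetical classes, not any vendor's actual implementation): code typed against java.util.Date keeps working, while the subclass reports in-place mutations to an observer.

    ```java
    import java.util.Date;

    // A Date subclass that notifies an owner the instant it is mutated.
    // A real implementation would override every mutator, not just setTime.
    class TrackedDate extends Date {
        interface DirtyListener { void markDirty(); }

        private final DirtyListener listener;

        TrackedDate(long time, DirtyListener listener) {
            super(time);
            this.listener = listener;
        }

        @Override
        public void setTime(long time) {
            super.setTime(time);
            if (listener != null) {
                listener.markDirty(); // the field's owner learns of the change
            }
        }
    }
    ```

    The application still declares and uses plain java.util.Date; the implementation substitutes the tracking subclass when it loads the field.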

    I'll just echo my previous post and encourage people to try JDO out. Don't get bogged down in details until you see what it's like to use it.
  43. plus...[ Go to top ]

    As mentioned clearly in the subject, I was simply responding to various points made in different threads, not to a single post.

    To address your "main thrust": the spec doesn't limit what JDO implementations can support. In fact, I think multiple vendors support custom mappings and dirtying (Kodo and LiDO). And those types do -not- have to be enhanced either, unless they are first-class objects.
  44. JDO: Try it![ Go to top ]

    """perhaps people should try it and find out how much better a spec designed from the ground up, instead of hacked multiple times like CMP, can be."""

    I'd like to echo this sentiment. As a JDO vendor, I can tell you that once developers try JDO, they're almost always hooked on it. JDO is extremely user-friendly, but at the same time there are implementations that are powerful and (yes) mature enough to go head-to-head with proprietary O/R tool vendors on features.

    The fact is that the established players in persistence all have significant resources invested in either CMP or their proprietary tools. Why would they bother implementing JDO when they can spread FUD so much more easily, and, judging by posts here and on other forums, so effectively?

    But don't take my word for it; just try JDO. Because of its transparency and ease-of-use, you can get a real feel for it in just a couple of hours. Download Kodo (which has a quick tutorial you can take) or one of the other major implementations listed on JDOCentral.com. If JDO's detractors would actually try it, at least they'd have some real ammo for their rants against it :)
  45. Juergen,

    I'm a bit confused by your statement:

    <quote>CocoBase's Ward Mullins may rant a lot, but he has valid arguments backing his views. Why should toolkit vendors be forced into a certain persistence mechanism, especially for existing products? There isn't any technical need for it...</quote>

    My interpretation of this is that you are concerned that vendors of non-JDO O/R mapping tools should not be forced to conform to a standard. This seems like something of a truism. To be JDO-compliant, one must, by definition, conform to the JDO APIs.

    Am I misunderstanding your comment?

    -Patrick

    --
    Patrick Linskey
    SolarMetric Inc.
  46. Patrick,

    What I meant was: There isn't any technical need for forcing a certain modification detection mechanism like bytecode manipulation, or reflection for that matter.

    Shouldn't it be possible to specify JDO as a persistence API with appropriate semantics but without implementation details? That would allow both bytecode-manipulation-based and reflection-based toolkits to comply with JDO without sacrificing their respective approaches.

    Unfortunately, I doubt that the JDO expert group will move in that direction. I even assume that it would be hard to redefine JDO that way while keeping 1.0 compatibility, as parts of the API and some semantics are currently tied to the bytecode manipulation approach.

    Juergen
  47. AccessibleObject.setAccessible()[ Go to top ]

    >> Reflection-based frameworks leave this step up to the developer, who must expose persistent fields either directly as public fields or via public setters and getters. <<

    Excuse me???!!! Have you noticed the setAccessible() method on AccessibleObject? Basic knowledge of the Java APIs on the part of persistence layer implementors is usually assumed....

    The reflection API, of course, allows access to ANY method or field of ANY class (subject to security checks, of course).

    >> So, one of the major (and least-discussed) advantages of JDO's enhancement requirements is that encapsulation of your data model is possible in JDO, but typically impossible without some serious custom coding in reflection-based frameworks. <<

    Absolutely incorrect. It sounds like you have never investigated the reflection-based approaches (and have a very shallow knowledge of the reflection API). Please take some time to look at Castor, OJB or Hibernate. Their source code is publicly available.
  48. more .....[ Go to top ]

    >> Shouldn't it be possible to specify JDO as a persistence API with appropriate semantics but without implementation details? <<


    Why do all the JDO boosters keep avoiding this *simple* question?!

    Surely it is possible, right? It's possible for CORBA ... EJB ... JMX ... JNDI ... JTA ... JDBC ... what's so special about JDO??

    Then if bytecode modification is SO superior, as is claimed, users and implementors will all choose this superior technology, correct? Benchmarks and the experiences of users will demonstrate its superiority, and eventually it will be adopted universally...

    Or is it that, on a level playing field, bytecode enhancement would NOT actually win the competition?


    Sorry ... I know I've gone into polemic mode, which I meant to avoid, but I simply don't understand why the notion that specifications should not include implementation details is controversial at all.
  49. RE: more .....[ Go to top ]

    I've made this point before and I will make it again. As a *DEVELOPER*, not a *VENDOR*, I simply don't care if reflection-based toolkits can't comply with the JDO spec. I have plenty of compliant JDO implementations that I can choose from.

    The JDO spec put a stake in the ground and said, "this is what JDO is" by specifying the bytecode approach. It is what it is, and apparently it does not easily include reflection based toolkits.

    Furthermore, as a developer, I like the fact that the JDO bytecode approach never intrudes into my domain object model. I don't have to extend any special classes or make any fields public, etc.

    Frankly, I think the JDO spec is a breath of fresh air. Instead of trying to be all things to all people, JDO standardized on a very smart, elegant way of specifying *part* of the solution. To have made an all-inclusive API would have watered down the elegance of the design.
  50. RE: more .....[ Go to top ]

    > I've made this point before and I will make it again. As
    > a *DEVELOPER* not a *VENDOR* I simply don't care if
    > reflection based toolkits can't comply with the JDO spec.
    > I have plenty of compliant JDO's that I can choose from.

    Do performance testing and you'll care, especially if you need to do any distributed operations. This approach doesn't scale for a variety of reasons: it places way too much weight in the client VM, or generates way too much network traffic if you don't place the intelligence in the client. In distributed environments, the change callback system of JDO is frankly unusable, based on my analysis of it.

    You may not care now, but try to build a sufficiently complex project and you will.

    Ward Mullins
    CTO
    THOUGHT Inc.
    http://www.thoughtinc.com
  51. RE: more .....[ Go to top ]

    I'm missing your point on how the JDO approach doesn't scale. I've made the point in earlier postings that the bytecode enhancement just intercepts field manipulation and delegates to vendor-provided classes. Those classes could be lightweight proxies to a central server, or distributed write-through transactional caches, or read-only invalidating caches, etc. JDO in no way places limitations on the architecture that would prevent scalability, so I disagree that JDO places too much weight on the client for the system to be implemented in a scalable fashion.

    In fact, performance testing is exactly what I have proposed. I have suggested that some of the JDO vendors get together and reimplement the Rice University bidding benchmark, and apparently that process is already underway.
    Let's wait and see. We'll let the numbers speak for themselves.
  52. Complete rubbish[ Go to top ]

    Hi Ward,

    I'm sorry, but what you have just said sounds like complete rubbish to me bordering on spreading of FUD.

    Can you explain why "this approach" doesn't scale?

    And what "change callback" system is frankly unusable?

    Cheers
     - Keiron
  53. RE: change callback system[ Go to top ]

    <quote>In distributed environments, the change callback system of JDO is frankly unusable based on my analysis of it.</quote>

    Can you share that analysis? The change callback system is precisely what makes JDO a higher-performance architecture than a reflection-based system. Read 100,000 objects into a transaction, change a single field of a single object, and then commit: any reflection-based "transparent" persistence system will need to do 100,000 comparisons of each of the object's fields. JDO won't need to do any. How does that make JDO unusable?

    --
    Marc Prud'hommeaux
    SolarMetric, Inc.
  54. RE: change callback system[ Go to top ]

    Hi,

    > Read 100,000 objects into a transaction, change a single
    > field of a single object, and then commit: any reflection-
    > based "transparent" persistence system will need to do
    > 100,000 comparisons of each of the object's fields. JDO
    > won't need to do any. How does that make JDO unusable?

    Agreed, a reflection-based persistence system will not be very "fast" there, but it will not be very fast in the "best" JDO implementations either. It is not a realistic use case; does anybody need this in a JDO application? I think it is possible to find better ways to solve performance problems like this.
  55. RE: change callback system[ Go to top ]

    Not sure if you are suggesting that JDO has the same problem (that is, having to do 100,000 comparisons). JDO does not have this problem. With JDO, the StateManager immediately knows when a single field of an object is read or written. This is what the bytecode enhancement does: there is a single JVM instruction for accessing the field, and the JDO enhancer replaces this field access with bytecodes that delegate to the StateManager. It's really very simple and efficient.
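    Conceptually (hypothetical names, not real enhancer output), the rewrite turns a direct field write into a call through a generated mediation method that notifies the state manager:

    ```java
    // Stand-in for the vendor's state manager: it learns of a change
    // the instant the field is written, so no commit-time diffing.
    class StateManager {
        boolean dirty;
        void fieldChanged(String name) { dirty = true; }
    }

    class Employee {
        int salary;                   // the persistent field
        StateManager jdoStateManager; // attached by the implementation

        // The enhancer (conceptually) redirects the putfield bytecode
        // for "employee.salary = x" to a generated method like this:
        static void jdoSetsalary(Employee pc, int newValue) {
            pc.salary = newValue;
            if (pc.jdoStateManager != null) {
                pc.jdoStateManager.fieldChanged("salary");
            }
        }
    }
    ```

    The application source still reads `employee.salary = 50000`; only the compiled bytecode is rewritten to route through the mediation method.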
  56. RE: change callback system[ Go to top ]

    Yes, it is a good way to detect changes on an object, but the ways used by reflection-based frameworks are not bad either. I do not need to load 10,000 objects to change a single field; I prefer not to load objects I do not need. Reflection-based frameworks detect changes at commit time; they have all the data needed to detect them, and it is not the main overhead for persistence.
  57. RE: change callback system[ Go to top ]

    Hi,

    > I do not need to load 10000 of objects to change single
    > field, I prefer not to load objects I do not need.
    > Reflection based frameworks detect changes at commit
    > time, they have all data needed to detect it and it is
    > not the main owerhead for persistence.

    In my experience it's quite common to load maybe 1000 objects, perform some calculation, and then update a handful of result objects.

    I need to load these other objects to be able to determine what to update. Surely reflection-based systems don't check every field of every object that has been loaded to see if it's changed???

    Cheers
     - Keiron
  58. Dirty Checking[ Go to top ]

    >> I need to load these other objects to be able to determine what to update. Surely reflection-based systems don't check every field of every object that has been loaded to see if it's changed??? <<

    One common misconception among people who have never actually done any performance profiling of this is that it would be some kind of huge performance overhead.

    I have done *heaps* of profiling with and without JProbe and determined that this is simply not an issue.

    Think about it:

    You just loaded 10,000 objects!! Did you consider how much work there *is* in that, on the part of the database, the JDBC driver, object instantiation, setting fields, resolving object identity, etc.?

    (Actually by far the overwhelming amount of work in all this is the work done by the database and JDBC driver.)

    Dirty-checking the fields afterward is no big deal. As long as you use a short-circuit for the case of ==, you are perfectly fine. Dirty-checking the fields takes about two orders of magnitude less work than initializing them in the first place!

    Try it out: run a JDBC query and read all the columns of your 10,000 rows. Then write some Hibernate code that queries, returns and dirty-checks 10,000 objects. The difference will be < 5%.

    So it's simply not an issue.
  59. RE: change callback system[ Go to top ]

    > In my experience its quite common to load maybe 1000
    > objects, perform some calculation and then then update a
    > handful of result objects

    OK, that is a more realistic use case :) Try the same with Hibernate and with some JDO implementation.
    I trust technologies proven in realistic use cases.
    I like the JDO API; it is a very nice experiment, and I do not think bytecode manipulation is any kind of problem.
    If the Hibernate authors and users decide to support JDO,
    I promise to help implement this kind of enhancer on some of my weekends.
    But I want to see a realistic, open JDO RI and tests before trusting the "standard". It is possible to write very good JDBC code without XML and generated garbage too.
  60. RE: change callback system[ Go to top ]

    A quote from the specification:
    "Values of fields can be read and written directly without wrapping code with accessors or mutators (field1 += 13 is allowed, instead of requiring the user to code setField1(getField1() + 13))"

    How can JDO enhancement help to detect changes for this use case?
  61. RE: change callback system[ Go to top ]

    OK, I found the answer in the specification myself: the enhancer replaces getfield/putfield instructions with jdoGetXXX/jdoSetXXX methods and transforms all public fields to private, so the JVM throws IllegalAccessError for "enhanced" public fields that are still accessed directly. Interesting workaround;
    thank you for this Transparent Persistence!!!
  62. RE: more .....[ Go to top ]

    <Geoff>
     I've made this point before and I will make it again. As a *DEVELOPER* not a *VENDOR* I simply don't care if reflection based toolkits can't comply with the JDO spec. I have plenty of compliant JDO's that I can choose from.

    Frankly, I think the JDO spec is a breath of fresh air. Instead of trying to be all things to all people, JDO standardized on a very smart, elegant way of specifying *part* of the solution. To have made an all-inclusive API would have watered down the elegance of the design.

     ...and one more thing: Will all the anti-JDO folks just identify up front what O/R mapping vendor you work for. Your postings are giving the impression that application developers are unhappy with the JDO API, which I don't think is the case.
    </Geoff>

      Have you tried JDO yourself?
      We've tried it, and now I can tell you the things that we didn't like:
    1) most JDO implementations are quite immature right now;
    2) byte-code manipulation makes model objects too heavyweight to pass during remote invocations; using reflection would allow us to use the same objects on both client and server;
    3) as the O/R mapping behaviour is not included in the spec, it's hard to switch implementations; most of the O/R stuff is vendor-dependent, and the common part is quite small;
    4) personally I don't think the API is so elegant: the classes are pretty large (especially javax.jdo.PersistenceManager and javax.jdo.JDOHelper). Also, JDOHelper duplicates some methods from PersistenceManager and is a static utility class, which makes some things problematic. I understand the need for such duplication, but I think it could be somewhat redesigned;
    5) some important things for O/R mapping are under-specified (like locking behaviour). For instance, Hibernate allows you to specify a lock mode for each select, so that objects will be (or won't be) selected in "FOR UPDATE" mode. In JDO you only define the transaction mode (pessimistic or optimistic), and all the queries issued in the transaction use (or don't use) a 'FOR UPDATE' statement. This sometimes leads to whole tables being locked.
      Some vendors (SolarMetric, for instance) give pretty good support, and that saves us for now.
      As for byte-code manipulation, I think it's a step back, and including that requirement in the spec was the most arguable decision.
  63. RE: more .....[ Go to top ]

    Alex:" 2) byte-code manipulation makes model objects too heavyweight to pass them during remote invocations. Using reflections would allow us to use the same objects both on client and server;"

    I disagree with this. The JDO spec makes explicit the behavior of enhanced objects during serialization. None of the flags added as part of enhancement is sent over the network during serialization. There are few added fields anyway. From the spec:

    "The explicit intent of JDO enhancement of serializable classes is to
    permit serialization of transient instances or persistent instances to a format that can be deserialized
    by either an enhanced or non-enhanced class. ... If a standard serialization is done to an enhanced class instance, the fields added by the
    enhancer will not be serialized because they are declared to be transient."

    You can use the same objects on the client and server. JDO also makes explicit that enhanced classes will work just fine with RMI and EJB.

    I think there is a general misunderstanding out there about what JDO bytecode enhancement does. All it does is add a few flag fields and intercept field reads/writes to delegate to the vendor provided classes (which of course DO NOT intrude on the domain object model or in any way form part of the closure of instances).
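    The transient-field behavior quoted from the spec is easy to verify in plain Java. This sketch (hypothetical class names, not real enhancer output) round-trips an object whose enhancer-style field is declared transient:

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    // Transient fields -- as enhancer-added fields are -- never travel
    // with the serialized form, so a deserialized instance carries only
    // its application-level state.
    class Person implements Serializable {
        String name;                          // application field: serialized
        transient Object jdoStateManagerLike; // enhancer-style field: skipped

        Person(String name) { this.name = name; }
    }

    class RoundTrip {
        // Serialize and deserialize in memory.
        static Person copy(Person p) {
            try {
                ByteArrayOutputStream bytes = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(bytes);
                out.writeObject(p);
                out.close();
                ObjectInputStream in = new ObjectInputStream(
                        new ByteArrayInputStream(bytes.toByteArray()));
                return (Person) in.readObject();
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
    ```

    The deserialized copy has its transient field reset to null, which is exactly why enhancement flags impose no weight on remote invocations.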
  64. We already won[ Go to top ]

    Keiron, David, Abe, Patrick
    (plus a special thanks to Geoff :-)

    Don't waste your time in trying to answer to TopLink, Hibernate, Castor and others.

    They already lost the game with their proprietary offers (even if TopLink and Hibernate are good tools, that is not the point). If they keep trying to spread FUD about JDO, it is just because they don't have any technical arguments anymore.

    The question is not about reflection against enhancement it is about ease of use, standards, fully transparent persistence...
    We all know that fully transparent persistence cannot be achieved through reflection.
    We all know that runtime reflection has a cost (see conclusions from the Rubis benchmark).
    This has been already discussed with Jeff Norton (former ObjectPeople, probably now Oracle) in Summer 2001 and he finally agreed.

    The best proof is that any thread on TSS about CMP or TopLink is very quickly transformed into a thread about JDO. Period.

    I attended the Sun "Tech Days" in Paris in December.
    Dennis Leung was there.
    They said Dennis needed to get back to the US very quickly, so only one question was taken from the audience.
    The only question was: "Is TopLink JDO compliant?"
    The answer was "we support JDO but we don't implement it".

    They already lost, believe me.



    Best Regards, Eric (JDO vendor, www.libelis.com).
    LiDO, Enterprise Information Access.
    with JDO: Just Do Objects.
  65. We already won[ Go to top ]

    Eric,

    (rereading your post because I cannot believe it)

    Such incredible arrogance combined with blatant "us vs them" separation definitely doesn't help in terms of understanding the issues that we are talking about. If that's close to the attitude of the JDO expert group (are you a member?), then I'm slowly beginning to understand...

    What we need is open discussion of technical merits, without commercial underpinnings. As an application architect I want to understand the technical options that are available, not simply adopt the result of a PR war.

    You are just building walls, showing a disgraceful attitude. Despite my potential interest in JDO, I won't even consider buying a Libelis product any time soon, and I'm known for spreading my opinion. You've just done your company a great disservice.

    Juergen

    P.S.:
    Are there actually any convinced JDO users posting on this forum? I get the creeping impression that the JDO proponent group here solely consists of JDO implementers...
  66. We already won[ Go to top ]

    You are absolutely correct, especially regarding LiBELIS. When we evaluated JDO products, LiDO was found to be one of the worst: strange behaviour and NullPointerExceptions in the GUI mapping tools, combined with an absence of support responses.
    To my mind, Sun does not have the right to make new standards that are not approved by the leading vendors.
  67. We already won[ Go to top ]

    In my previous post saying "You" I meant "Juergen"...
  68. Participating in JDO 2[ Go to top ]

    Hi all,

    Juergen, Gavin: Did you try to join the JDO expert group?
    You can simply subscribe to the JCP (as an individual or representing your company), then ask the JDO expert group to co-opt you, and then you'll be able to discuss with the other experts and present your ideas.
    Within a few days you can be part of that expert group.

    The JDO expert group is very open.
    For instance, we have these days a debate about the mandatory byte-code enhancement.
    To me, the question is not about "byte-code enhancement versus runtime reflection" (or any other mechanism you can imagine, such as specific VMs or source-code pre-processing...); the question is that we wanted to promote a standard for FULLY transparent persistence in Java. If we can achieve this with runtime reflection, that is OK.

    I can remember we had long discussions with the TopLink people in 2000 and 2001 about this, and the conclusion was "FULLY transparent persistence cannot be achieved with runtime-reflection-based mechanisms".

    JDO is not "just an API", because with APIs alone you cannot achieve fully transparent persistence unless the JVM or the javac compiler is modified. Byte-code enhancement must be seen as an extra compilation step, nothing else.

    Byte-code enhancement is not a JDO invention. You can go to the BCEL site and see for yourself the many possible uses. For instance, most Aspect Oriented Programming tools also rely on byte-code enhancement. It is a very powerful and generic approach (and not difficult to implement).

    We could decide to transform JDO from "transparent persistence for Java" into something less ambitious such as "a persistence API", but that is a big decision, and what would be the point?

    Right now, there is a standard. Some vendors decided to implement it; other vendors decided not to. So what? There are still Web Servers that are not J2EE compliant; they have their own market, and they don't ask the J2EE expert group to change the standard to make them compliant.
    Nobody said a non-JDO product is a bad product.
    It seems that the non-JDO vendors are upset because they are not compliant with the standard. That was their decision. They can easily become compliant (see for instance what OJB does).

    We could twist the standard to support all approaches (pre-processing, post-processing, specific VMs, runtime reflection, dynamic proxies); then all products would be JDO compliant. But what would be the real benefit for customers in that case?
    Having a standard so thin that any product can be compliant?

    I don't want to decide whether byte-code enhancement is better than runtime reflection. This is a kind of holy war (there is the same debate in the J2EE arena, where the two approaches are also discussed; see for instance the Rubis benchmark:
    http://www.cs.rice.edu/CS/Systems/DynaServer/RUBiS/).

    Even byte-code portability is not a big question.

    The only question is about fully transparent persistence.
    If we can achieve that JDO goal with runtime reflection I will support it.

    Best Regards, Eric.
  69. Participating in JDO 2[ Go to top ]

    > Right now, there is a standard. Some vendors decided to
    > implement it, other vendors decided not to implement it. So what?
    > There are still Web Servers that are not J2EE
    > compliant and they have their own market and
    > they don't ask
    > the J2EE expert group to change the standard to make them
    > compliant.

    Some vendors cannot implement it; they are discriminated against. I have tried to download JDO, but the Sun download center says something like
    "you are from the bad country". Is that a standard for me?
  70. We already won[ Go to top ]

    I personally converted an application built on a RAD framework and BMP to 100% compatibility with JDO in a very short time. Our business guys had gotten tired of all our excuses based on entity beans' weaknesses, so we looked at alternatives. We are a standards-based company (having been burned on our vendor-locked OR/RAD framework), and JDO seemed promising enough to us. Surprisingly enough, most of the time was spent undoing all the entity bean hacks and turning our data model back into what we had originally envisioned.

    The performance immediately doubled, we solved old legacy locking problems, and we also reduced our codebase by a quarter.

    As illustrated by Marc and Keiron, there are some strong technical reasons for picking JDO. It was sophisticated enough for my enterprise application (personal finance software), and yet easy enough to map legacy data with simple bean objects and a trivial API.
  71. Re: We already won[ Go to top ]

    Eric,
     
    In addition to the rather ...ummmm.... surprising tone and content of your posting, there are several factual errors that need to be corrected.
     
    1. There has never been an employee of the TopLink team either at The Object People or since by the name of Jeff Norton. I am not sure who you were talking to, but I can assure you it was not a senior member of the TopLink development or management team.
     
    2. Dennis Leung was not at the Sun Tech Days event in Paris. I was the presenter I believe you were referring to.
     
    3. It was never said that I needed to leave quickly to return to the US. Actually, both Dennis and I are Canadians.
     
    4. There was certainly more than one question. In fact, I stayed for 45 minutes after the talk discussing technical issues with interested audience members.
     
    5. The answer I gave to the question you refer to in your posting was absolutely not "...we support JDO but we don't implement it". The answer was fairly lengthy, but if you were to try to condense it to a sound bite, it would be more along the lines of "...we support the aims of JDO but have chosen at this time to only implement the portions of the spec which we believe add value".
     
    I am not interested in joining the debate raging on whether JDO as it stands today is the one right way to do persistence. Suffice it to say that I believe that there are many ways to solve this problem, most of which have some merit in different applications and application architectures.
     

    Mike Milinkovich
    Oracle Corporation
    (TopLink vendor)
  72. more .....[ Go to top ]

    """Then if bytecode modification is SO superior, as is claimed, then users and implementors will all choose this superior technology, correct?"""

    We all know that in the real world, this is not true. Having the better technology is an advantage, but does not guarantee success. You probably think that [insert your favorite persistence tool here] is better than CMP beans, and you're probably right, but is it more popular? No. Because marketing has convinced the decision makers that CMP is the way to go.

    The same thing could, and probably would, happen with bytecode modification vs. reflection. Bytecode modification allows for some pretty amazing stuff that reflection simply cannot do, the most obvious being lazy loading of any field and instant change notification.

    Could JDO accommodate reflection-based approaches? Yes, I think it could, with some changes to the spec (the spec sometimes assumes that the JDO implementation instantly knows when a field is modified, and that would not be the case with reflection).

    *Should* JDO accommodate reflection-based approaches? I personally think it should. Reflection has some interesting advantages of its own, and anything that might bring JDO to a wider audience is a good thing, in my mind. But let's not get upset at the spec team for trying to promote what they see as a superior technology in bytecode enhancement. And let's certainly not buy into the FUD being spread by some reflection-based vendors (only some!), who will just find something else to complain about if JDO eases the enhancement requirement.

    Instead, let's encourage users to try out both approaches and see which one suits their needs. Let's also encourage O/R vendors to let the JDO spec team know that they would support JDO if only it stops requiring enhancement. Without support from vendors, what motivation would the spec team have for changing the system? Especially when developers who actually try JDO already almost universally enjoy using it.
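    A tiny illustration of the lazy loading point above: because every field access is mediated, a value can be fetched from the datastore only when first touched. This is a hand-rolled sketch of the idea (the class names and the map-backed "datastore" are invented for the example), not any vendor's generated code:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-field lazy loading via interception: the "datastore"
// here is an in-memory map standing in for a database.
public class LazyFieldDemo {
    static Map<String, String> datastore = new HashMap<>();
    static int loads = 0; // counts actual datastore hits

    static class Document {
        private String body;        // potentially huge field
        private boolean bodyLoaded; // per-field loaded flag

        // What an enhanced/intercepted getter effectively does:
        // load the field from the store only on first read.
        String getBody() {
            if (!bodyLoaded) {
                body = datastore.get("doc-1.body");
                bodyLoaded = true;
                loads++;
            }
            return body;
        }
    }

    public static void main(String[] args) {
        datastore.put("doc-1.body", "large text...");
        Document d = new Document();
        d.getBody();
        d.getBody(); // second read served from memory
        System.out.println(loads); // prints 1
    }
}
```

    A pure reflection-based mapper can only do this per-object (e.g. via proxies), not per-field, which is the distinction being argued about in this thread.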
  73. re: more .....[ Go to top ]

    "Then if bytecode modification is SO superior, as is claimed ..."

    I don't think it's even a question of being "superior". It is just one way to solve certain types of problems. Further, I talked to a JDO vendor fairly recently and they claimed that the JDO spec does not require byte code manipulation at all. Is that true? If byte code manipulation is truly required, then I can understand your concern, but I assumed that you could support JDO without it.

    As for the term, I guess "byte code manipulation" seems so scary ... but isn't that what JAVAC does when it builds a .class file?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  74. > "Then if bytecode modification is SO superior, as is claimed ..."

    > I don't think it's even a question of being "superior".

    Actually, the architects did claim this - though I and many other engineers helped to disprove the myth.
     
    > It is just one way to solve certain types of problems.

    Yes, and in a language like C++ it's actually a great method. In Java however, it's problematic.

    > Further, I talked to a JDO vendor fairly recently and
    > they claimed that the JDO spec does not require byte code
    > manipulation at all. Is that true?

    Sounds like they either were trying to mislead you or didn't understand the spec. To be compliant, the Java classes must contain certain generated code. That code must be added either by bytecode or source code manipulation - yuck!

    > If byte code manipulation is truly required, then I can
    > understand your concern, but I assumed that you could
    > support JDO without it.

    No you can't support JDO without mangling either your source or bytecode. This paradigm was inherited from C++ where this was necessary. In the Java world however it's just unnecessary and problematic.

    > As for the term, I guess "byte code manipulation" seems
    > so scary ... but isn't that what JAVAC does when it
    > builds a .class file?

    No - that's compilation. Manipulation changes the APIs and class structure to something other than what you defined. This is closer to a 'side effect' than a compilation process.

    CocoBase provides very high performance O/R mapping that doesn't require bytecode manipulation. When the JDO folks were claiming bytecode manipulation was the only way, we had enterprise customers doing millions of database operations a day on our layer based on reflection. Too bad they didn't ask us when they made this assumption :(

    Sun promises to fix all of this in the 2.0 spec by being more inclusive of vendors like us to make sure the 2.0 spec doesn't get so far off track. Hopefully they'll live up to their promises :)

    I personally think a great opportunity was squandered on the 1.0 spec - too bad...

    Just my $.02
    Ward Mullins
    CTO
    THOUGHT Inc.
    http://www.thoughtinc.com
  75. Enhancement is not required[ Go to top ]

    >> Further, I talked to a JDO vendor fairly recently and
    >> they claimed that the JDO spec does not require byte code
    >> manipulation at all. Is that true?
    >
    > Sounds like they either were trying to mislead you or didn't
    > understand the spec.

    Incorrect: the developer is free to implement the javax.jdo.spi.PersistenceCapable interface themselves (see the docs). Manipulation is not necessarily required, although it is much easier to do so.

    --
    Marc Prud'hommeaux
    http://www.solarmetric.com
  76. Enhancement is not required[ Go to top ]

    > Incorrect: the developer is free to implement the
    > javax.jdo.spi.PersistenceCapable interface themselves
    > (see the docs). Manipulation is not necessarily required,
    > although it is much easier to do so.

    What do you call having to change your source code? That's manipulation. Don't mislead customers in this way. JDO requires the Object model be changed, either through changing the source code of every object in your model, or bytecode manipulation.

    Come on, get real... I know you guys want to sell your product, but don't mislead them in this way, it's unprofessional...

    Ward Mullins
    CTO
    THOUGHT Inc.
    http://www.thoughtinc.com
  77. RE: Enhancement is not required[ Go to top ]

    You said that the JDO vendor was lying when they said that "the JDO spec does not require byte code manipulation at all". I showed that that is incorrect.

    <quote>I know you guys want to sell your product, but don't mislead them in this way, it's unprofessional</quote>

    Indeed, misleading customers and attempting to obfuscate a discussion of the merits of different persistence systems is very unprofessional, which is why I responded to your post.

    --
    Marc Prud'hommeaux
    SolarMetric, Inc.
  78. All these posts about how byte-code enhancement is bad seem to be coming from vendors of various persistence frameworks. While I see some merit to the objections to byte-code enhancement, as an application developer I really DON'T CARE if my byte code is enhanced. JDO provides a very elegant and simple persistence mechanism that allows me to focus on my application business logic without worrying too much about the underlying persistence. The fact that it is a standard (even if it is not an official J2EE one) is good because it lets me switch between competing vendor implementations. Competition keeps the price down - I remember investigating TopLink and just could not afford it. It is only a matter of time before we have a good open source JDO implementation.
    The ease of use and the non-proprietary nature of JDO matter more to application developers than the issue of whether the implementation uses byte-code enhancement or reflection.

    Vijay
  79. Hi,
    I am an application developer too. I contribute to open source projects because I need quality in my applications, and I cannot ensure quality without source code. Byte code generation is The Good Thing, but JDO overuses it, and vendors try to sell this experiment as a standard. CMP is a standard too ... .
    See TJDO, XORM and OJB if open source projects interest you; I think most JDO users know them.
    I am not afraid of bytecode; I use it too (see http://cglib.sf.net, which is also an experiment at this time). It is not any kind of standard, but if you get a ClassFormatError you can fix it yourself, without waiting for the next version or searching for a better implementation; if you can't understand the code, please don't use it. I have had some practice with a certain market leader's "Support", and I am not going to adopt a closed source JDO implementation; closed code is The Useless Thing for coders.
    But I like most of the ideas in JDO, and I want to see it more open, with a better API, before using it.
    I play with JDO at home and in open source projects, but I am not going to use it in production at this time.
  80. I have found that one of the biggest barriers to componentizing an enterprise application is the question of "how do I get my tables created when the app is deployed?" Often the answer is that one provides a bunch of DDL statements in a SQL script that has to be run by the DBA ...yuck. Database setup is a huge barrier to plug-and-play EAR based applications that need to create tables.

    Now I am just musing here, but if the app server itself included a JDO runtime engine, it could create a default O/R mapping at the time the JDO based component was deployed. By looking directly at the "jdoFieldNames" array that is embedded in the enhanced bytecodes, there would be no need to distribute a proprietary O/R mapping with the application - unless one wanted to override the default O/R mapping produced by the application server.

    This comes full circle to the issue of (perhaps) why JDO specified binary compatibility for the enhanced bytecodes. Binary compatibility ensures that I can look inside the classfiles (e.g. with BCEL) and find the "jdoFieldNames" array etc., thus being able to determine which fields are enhanced and consequently what tables need to be created in the DB.

    I've overlooked a gazillion issues, such as: what would a JDO component be? Some new lightweight J2EE persistence component? There are lots of hard questions and areas for research, but I see huge promise in JDO's bytecode compatibility for addressing the issue of automated deployment/setup of persistence-capable components.
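    As a rough sketch of the musing above, a deployment tool could read the spec-mandated "jdoFieldNames" array with plain reflection rather than full BCEL parsing. The `Demo` class below is an invented stand-in for a real enhanced class:

```java
import java.lang.reflect.Field;

// Hedged sketch: reads the static metadata array that the JDO 1.0
// binary-compatibility contract has an enhancer add to a class.
// Demo stands in for a real enhanced persistent class.
public class FieldNameProbe {
    static class Demo {
        static final String[] jdoFieldNames = {"id", "name", "salary"};
    }

    static String[] persistentFields(Class<?> cls) {
        try {
            Field f = cls.getDeclaredField("jdoFieldNames");
            f.setAccessible(true);
            return (String[]) f.get(null); // static field, no instance needed
        } catch (ReflectiveOperationException e) {
            throw new IllegalArgumentException(cls + " is not enhanced", e);
        }
    }

    public static void main(String[] args) {
        for (String name : persistentFields(Demo.class)) {
            // A deployment tool could derive default column names here.
            System.out.println(name);
        }
    }
}
```

    This only works because binary compatibility fixes the names and shapes of the generated members, which is exactly the point being made about automated deployment.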
  81. Attributes are a good solution to this in .NET, but they are not a solution for all problems either.
    Deployment tools need more info/metadata to create a schema.
    JDO uses XML to store metadata, but you will need to rewrite it manually (possibly generated garbage) if you decide to migrate to a "better" implementation.
    You will need to rewrite code too - see the optional features.
    It would be better to remove the optional features from the specification if portability is the goal of this specification.
    There is no good way to define standard attributes for "transparent persistence", which can use an RDBMS or a file to store objects. Pseudo-standard attributes/metadata are not very useful and could be left unspecified. They are used by enhancement, but PersistenceCapable/StateManager do not solve "transparence" (see arrays, blobs, public fields).
    Most good existing tools implement persistence without this contract; it is not useful for the user and could be left unspecified too.
    The current JDO specification conflicts with technologies proven in real life, and with innovation, because it tries to specify too much.
  82. A vendor's response[ Go to top ]

    I now question what the value is in being able to take a JDO-enabled app class binary and just 'run it' in a different vendor's product. That seems to be the reason they required the bytecode stuff: so that all the different JDO vendors could read the same classes. Is this some remnant dream from the early days of EJB, when Sun cared about creating reusable business components?

    To require byte code compilation just to enable this class reuse sounds impractical. I mean, no matter what, if you take some code and want to run it in a different vendor's product, you will HAVE to go through a compile/config cycle anyway, just like with J2EE apps. So the idea of having reusable binaries is not useful in the real world. What is important is reusable/compatible source code.

    So why doesn't JDO just standardize on what's really important, which is source code compatibility, and specify all the runtime semantics (like lifecycle, caching, etc.)? Let the vendors worry about how they actually make the JDO source code persist.
  83. A vendor's response[ Go to top ]

    Sorry about the last, incomplete posting: I hit enter by accident.

    <quote>I mean, no matter what, if you take some code and want to run it in a different vendors product, you will HAVE to go through a compile/config cycle anyway, just like with J2EE apps.</quote>

    That isn't necessarily true. Do you mean that every persistent class will require custom vendor extensions in order to choose which tables to map to? A good JDO implementation will allow a vanilla .jdo metadata file to map to a database with an automatically named schema. Kodo allows this, as do some other JDO vendors.

    The idea that you can take one vendor's pre-defined persistent classes and drop it in to your application is a very powerful one.
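    For reference, a "vanilla" .jdo metadata file of the kind mentioned above is a small XML document along these lines (the package and class names here are invented for illustration):

```xml
<?xml version="1.0"?>
<jdo>
  <package name="com.example.model">
    <class name="Customer">
      <!-- only relationships and special cases need describing;
           simple fields are persistent by default -->
      <field name="orders">
        <collection element-type="com.example.model.Order"/>
      </field>
    </class>
    <class name="Order"/>
  </package>
</jdo>
```

    Note that nothing in this file names tables or columns; that is exactly why an implementation is free to generate a default schema from it, or to attach its own mapping extensions.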
  84. <quote>A good JDO implementation will allow a vanilla .jdo metadata file to map to a database with an automatically named schema. Kodo allows this, as do some other JDO vendors.</quote>

    That's nice. But the problem is, 90% of the time, you are working with an existing database schema, which won't match your automatically named schema (what are the chances your CUSTOMER table is named XCUSTOMER?). For the remaining 10% where you are developing a new project from the ground up, your DBA will still kill you if you try to auto generate the database schema (and name your CUSTOMER table XCUSTOMER).

    Without the O/R mapping layer specified, the JDO API is going nowhere.
  85. I have seen the point made that JDO should focus on the O/R mapping instead of just the javax.jdo API.

    I strongly disagree with this. I don't think the mapping should be part of JDO because JDO is supposed to stay out of the persistence style. (for example, could be a flat file, RDBMS, LDAP, or ODB).

    The argument that JDO "will go nowhere" without an O/R mapping standard also seems wrong. I would argue that a good way to ensure that JDO goes nowhere would be to try to get agreement on JDO WITH an O/R mapping. (Bearing in mind, of course, that mapping only makes sense for RDBMS-backed datastores, so a standardized O/R mapping makes no sense anyway for JDO.)

    It is not so much work to learn a mapping, especially when tools will provide default mappings from the enhanced classes.
  86. generating database schema[ Go to top ]

    "For the remaining 10% where you are developing a new project from the ground up, your DBA will still kill you if you try to auto generate the database schema (and name your CUSTOMER table XCUSTOMER)."

    I disagree with this. Generating the schema from classes is a *huge* timesaver on a new project. You can extend your model as quickly as you can create the classes and add fields. Just run "ant create-db" and you have a good database schema!

    You can also generate a script and give it to the DBA to fiddle later on in the project. By then you should be done with model changes.

    Some implementations (like JDO Genie) support configurable and user defined name generators. So if you do not like the naming conventions you can change them without having to hand edit the schema.

    Cheers
    David
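    A toy version of this schema-from-classes generation, roughly what a "create-db" task might do, can be sketched with reflection. The naming and type-mapping rules here are invented defaults for illustration, not JDO Genie's actual behavior:

```java
import java.lang.reflect.Field;

// Hedged sketch: derive a default CREATE TABLE statement from a class.
// Real tools handle many more types, relationships, keys and name rules.
public class SchemaSketch {
    static class Customer {
        long id;
        String name;
    }

    static String createTable(Class<?> cls) {
        StringBuilder sql = new StringBuilder("CREATE TABLE ")
                .append(cls.getSimpleName().toUpperCase()).append(" (");
        Field[] fields = cls.getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sql.append(", ");
            // invented default type mapping: String -> VARCHAR, else BIGINT
            String type = fields[i].getType() == String.class
                    ? "VARCHAR(255)" : "BIGINT";
            sql.append(fields[i].getName().toUpperCase()).append(' ').append(type);
        }
        return sql.append(")").toString();
    }

    public static void main(String[] args) {
        // e.g. CREATE TABLE CUSTOMER (ID BIGINT, NAME VARCHAR(255))
        System.out.println(createTable(Customer.class));
    }
}
```

    A configurable name generator, as described above, would simply replace the `toUpperCase()` calls with a user-supplied naming strategy.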
  87. "Without the O/R mapping layer specified, the JDO API is going nowhere."

    This would be nice but is not essential. There are dozens of O/R mappers out there. Everyone has their own idea on how to do O/R mapping. Leaving this out of the spec leaves vendors free to compete and optimize mapping.

    If you change JDO vendors, you just need to map your classes to your existing schema. That won't take long. Your code (the hard part) remains as it is!

    Contrast this to what happens if you are using a proprietary O/R mapper and you want to switch - you have to throw away most of your project.

    Cheers
    David
  88. Rong,

    The way bytecode / sourcecode enhancement works in the JDO spec, vendors can quite easily access all mapping logic (and all proprietary logic, for that matter) without doing anything but vanilla enhancement. That is, a JDO implementation (such as Kodo, made by the company I work for) can use proprietary extensions such as object/relational mapping information in conjunction with any object that implements javax.jdo.spi.PersistenceCapable.

    Our enhancer is purely by-the-spec. We introduce no extra code other than those methods and fields dictated by the specification during the enhancement process. We then analyze the JDO metadata at initialization-time and build up the appropriate data structures for performing mapping and other custom Kodo features.

    -Patrick

    --
    Patrick Linskey
    SolarMetric Inc.
  89. why binary compatibility[ Go to top ]

    My guess is that the binary compatibility requirement is a logical next step from the requirement for bytecode enhancement.

    Once it was specified that field manipulation would be detected via enhanced bytecodes around field reads and writes, this pushes all the interaction with the datastore to the StateManager and PersistenceManager, to whom the field manipulation is delegated.

    So this basically means that there is very little to what actually goes on in the manipulated byte codes. They just intercept and delegate to vendor provided classes. So the requirement for binary compatibility is almost a no brainer, once you have decided that bytecode manipulation is the right way to detect field manipulation.

    For all the reasons mentioned, and most notably total non-intrusion into your domain object model, bytecode enhancement is the best way to detect field manipulation. So the real issue at hand is not "why binary compatibility?", 'cause as I mentioned, that's a no brainer once you have decided to go the bytecode route. The real issue to debate is "should bytecode enhancement be mandated as a means for detecting field manipulation?".

    I have seen the arguments on both sides of the coin and the loudest arguments against bytecode manipulation come not from users but from vendors who already have a product that will be difficult to retool to work with bytecode enhancement. It must be painful to have designed a product and then JDO comes along with a more elegant way of doing things.

    That said, there are multiple vendors of mature JDO products that are not "dot-bombs", have never taken any venture money, and will be around tomorrow and the next day. Furthermore, if your JDO vendor does go Chapter 11, it is not as big a deal as if you had used a proprietary API.

    So, I have to again ask that the JDO vendors who have stable products get out here and make some noise, lest the anti-JDO vendors be the only ones who are heard. I think an excellent way to demonstrate the viability of JDO would be to do a JDO petstore, or to redo the Rice University "Bidding" benchmark with JDO.
  90. Rubis Benchmark[ Go to top ]

    Hi Geoff

    "or to redo the Rice university "Bidding" benchmark with JDO"

    There is a proposal being discussed among JDO vendors to do exactly this. Hopefully this will happen.

    Cheers
    David
  91. I'll bet that not only will the JDO implementations flat out demolish the CMP implementation's performance in the redone Rice University Bidding benchmark, but the lines of code will be the smallest of all the methods examined in the benchmark (CMP, BMP, straight JDBC, etc.)
  92. Floyd,

    <quote>To require byte code compilation just to enable this class reuse sounds unpractical.</quote>

    This is certainly not the reason that the JDO spec chose to go with an enhancement approach. Class reuse across vendors is a good thing in any API, but enhancement (sourcecode or bytecode) is part of the spec largely to allow for encapsulation of data and for efficient, transparent lazy loading.

    -Patrick

    --
    Patrick Linskey
    SolarMetric Inc.
  93. I don't share your opinion about loaded questions - and I certainly don't believe there is any JDO bashing going on in this interview.

    There are a few doubts in customers' minds about JDO - some of them reasonable, some of them not. As you say, it's very new. So customers are bound to have questions.

    Your arguments regarding O-R vendors' "fear" of JDO are not substantiated when you look at the rest of Java & J2EE. Standards tend to attract vendors rather than repel them (unless there is something in the standard they particularly don't like!). So the question is valid: why don't *any* of the O-R vendors like it?

    As I have stated in the past, the questions for me extend to "Why aren't the RDBMS vendors supporting it - or even announcing support for it?".

    For a standard to succeed, there must be vendor support for it. As for "catering to the needs of developers", that's all very well and good, but a successful standard has to cater to the needs of everybody - including the vendors. After all, they have to invest and risk their money to build the product.

    -Nick
  94. Oh yeah ...[ Go to top ]

    ...and one more thing: will all the anti-JDO folks just identify up front which O/R mapping vendor you work for? Your postings are giving the impression that application developers are unhappy with the JDO API, which I don't think is the case.
  95. Oh yeah ...[ Go to top ]

    Geoff,

    <quote>
    Furthermore, as a developer, I like the fact that the JDO bytecode approach never intrudes into my domain object model. I don't have to extend any special classes or make any fields public, etc.
    </quote>

    Again: With reflection-based toolkits, you don't need to extend special classes, and you don't need to make fields public. That's not an advantage unique to JDO's approach, although details obviously differ. Have you ever tried Hibernate, OJB, Castor, CocoBase, or TopLink - or even bothered to read their documentation?

    Just for the record, I don't work for an O/R mapping vendor, never have and probably never will. My main interest is simply choosing viable solutions for the application projects that I am involved in. My second driving force is technological curiosity. I have been watching the evolution of persistence solutions for some time now, including JDO since its public review stage.

    What I have noted repeatedly is that JDO proponents seem to treat JDO as a magic bullet and ignore valid alternatives! I can assure you that many developers will choose popular open source solutions that are sophisticated enough, proprietary or not, just to avoid commercial vendor lock-in. JDO doesn't shine in this respect! Why should an application developer care about a "standard" that seems to gain little acceptance? The advantage of choice is not too convincing under such circumstances...

    Juergen
  96. Oh yeah ...[ Go to top ]

    In fact I have looked into Hibernate and it looks pretty good. I think you are missing my point. I am not here to say that solutions like TopLink and Hibernate are not good. I am saying that the JDO spec IS a good thing. Is it all things to all people? ...no. However, some people on this thread seem to be bashing JDO for NOT being a magic bullet, while at the same time chastising JDO proponents for believing it IS a magic bullet. The implication they make is that there is a magic bullet spec, but JDO just isn't it. The bashers can just wait forever for a spec that is a magic bullet, because it will never come.

    The corporate JDO bashers would have us believe that the real reason they bash JDO is because it is incompatible with their implementations that can't use bytecode enhancement. They would have us believe that they really would like to see a unifying API for object persistence. But, the impression I have gotten from the vendor backed bashing is this: a spec means there will be implementations, many of them to choose from and switch out at will. And, horrors, open source, free implementations. A JDO spec will cause a flourishing of alternate implementations that will compete with their market share. What I smell is fear. It's all about money.

    Do not believe that the motivations of the corporate JDO bashers are altruistic. The lady doth protest too much. If in fact they believed the JDO spec was flawed and could never lead to practical high performance implementations, they would sit back and watch the whole thing collapse under its own weight.
  97. One such valid alternative is the JDX object-relational mapping technology from Software Tree. Currently shipping in version 3.5, JDX has been designed and developed with simplicity, non-intrusiveness, flexibility and scalability as overriding considerations.

    Many innovative features, which elegantly address the common but complex issues faced by Java developers related to object and relational modeling, mapping specification, clean APIs, optimized data access etc., have gone into JDX, making it an ideal solution for most persistence needs, as evidenced by many customer testimonials.

    The point is that there is more than one way to skin a cat. Instead of getting into religious arguments, it may be best to check out different offerings to decide which one meets your application needs best based on things like architectural fit, performance, robustness, ease-of-use and budget. If you need a needle, a sword may not be helpful. Similarly, if you need a Swiss Army knife, a toothpick won't be enough. However, I am sure someone will try to sell you a Swiss Army knife even though what you really need is a toothpick:)

    Cheers,

    - Damodar Periwal

    Software Tree, Inc.
    Simplify Data Integration
    http://www.softwaretree.com
  98. Hi,

    There has been some great discussion, enjoyed reading the threads. A common question is why can't JDO also support reflection as well as bytecode enhancement.

    This got me thinking as to how a reflection-based JDO would work and what JDO would need to change to accommodate it...and it raised some questions.

    Take a simple class:

    public class Person {
      private String name;
      private int age;
      // Methods omitted
    }

    With JDO today I could have an application that creates a new Person in a transaction, commits the transaction and then retrieves that same Person again thus:

    PersistenceManagerFactory pmf = ...
    PersistenceManager pm = pmf.getPersistenceManager();

    pm.currentTransaction().begin();
    Person me = new Person("Keiron McCammon", 32);
    pm.makePersistent(me);
    pm.currentTransaction().commit();

    pm.currentTransaction().begin();
    System.out.println("My name is: " + me.getName());
    pm.currentTransaction().commit();

    With JDO the 'me' Person instance transitions to being "hollow" after the commit. To access the 'me' instance in the new transaction I just access it, and JDO under the covers ensures that the instance is read again from the datastore. If I tried to access 'me' outside of a transaction, JDO would throw a JDOUserException.

    This is all nice and transparent, but I'm not sure how this would work when using reflection. With reflection what happens to 'me' after the transaction commits? And how would it know when I call 'me.getName()' that the fields need to be read again from the datastore?

    Cheers
     - Keiron
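    One answer reflection-based tools can give to Keiron's question (a sketch only, not what any particular vendor ships) is to hand out `java.lang.reflect.Proxy` instances instead of raw objects: the proxy stays "hollow" after commit and transparently reloads state from the datastore on the next method call. The `Person` interface, `HollowHandler`, and `load` method below are all hypothetical names for illustration:

    ```java
    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical domain interface; JDK proxies require an interface.
    interface Person {
        String getName();
    }

    class PersonImpl implements Person {
        private final String name;
        PersonImpl(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Invocation handler that simulates JDO's "hollow" state: the real
    // instance is dropped at commit and reloaded on the next field access.
    class HollowHandler implements InvocationHandler {
        private Person target;   // null while "hollow"
        private final String id; // key used to reload from the datastore

        HollowHandler(String id) { this.id = id; }

        public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
            if (target == null) {
                target = load(id); // transparent reload on first access
            }
            return m.invoke(target, args);
        }

        // Stand-in for a real datastore read.
        private static Person load(String id) { return new PersonImpl(id); }
    }

    public class HollowDemo {
        public static void main(String[] args) {
            Person p = (Person) Proxy.newProxyInstance(
                    Person.class.getClassLoader(),
                    new Class<?>[] { Person.class },
                    new HollowHandler("Keiron McCammon"));
            System.out.println(p.getName()); // triggers the reload
        }
    }
    ```

    The trade-off Keiron implies is real, though: this only works through interfaces (or subclass-generating libraries), whereas bytecode enhancement makes the domain class itself state-aware.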
  99. Hi,

    So now it's got me thinking some more. Let's say I read 1000 Person instances in a transaction and change the name of one.

    The JDO runtime knows which instance was changed from the 1000 and on commit writes back just that instance.

    How would this work if using reflection? How do I know which instance has been modified?

    Cheers
     - Keiron
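    The usual answer reflection-based mappers give to this question is snapshot comparison: record each instance's field values at load time, then diff them by reflection at commit to find the modified instances. Here is a minimal sketch under that assumption (the `UnitOfWork` class and its methods are hypothetical, not any product's API):

    ```java
    import java.lang.reflect.Field;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.IdentityHashMap;
    import java.util.List;
    import java.util.Map;

    // A plain domain class, as in Keiron's example.
    class Person {
        String name;
        int age;
        Person(String name, int age) { this.name = name; this.age = age; }
    }

    // Snapshot-based dirty checking: remember field values at registration,
    // then compare by reflection at commit to find changed instances.
    class UnitOfWork {
        private final Map<Object, Map<String, Object>> snapshots = new IdentityHashMap<>();

        void register(Object o) {
            snapshots.put(o, snapshot(o));
        }

        // Return only the instances whose fields changed since registration.
        List<Object> dirtyInstances() {
            List<Object> dirty = new ArrayList<>();
            for (Map.Entry<Object, Map<String, Object>> e : snapshots.entrySet()) {
                if (!snapshot(e.getKey()).equals(e.getValue())) {
                    dirty.add(e.getKey());
                }
            }
            return dirty;
        }

        private static Map<String, Object> snapshot(Object o) {
            Map<String, Object> values = new HashMap<>();
            try {
                for (Field f : o.getClass().getDeclaredFields()) {
                    f.setAccessible(true);
                    values.put(f.getName(), f.get(o));
                }
            } catch (IllegalAccessException ex) {
                throw new RuntimeException(ex);
            }
            return values;
        }
    }

    public class DirtyCheckDemo {
        public static void main(String[] args) {
            UnitOfWork uow = new UnitOfWork();
            List<Person> people = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                Person p = new Person("person" + i, 30);
                people.add(p);
                uow.register(p);
            }
            people.get(7).name = "renamed"; // modify one of the 1000
            System.out.println(uow.dirtyInstances().size()); // 1
        }
    }
    ```

    The cost is a snapshot per instance and a full comparison pass at commit, which is precisely the overhead JDO's bytecode enhancement avoids by having the modified instance flag itself as dirty on assignment.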