Interesting Recent Developments in Java Data Objects (JDO)


News: Interesting Recent Developments in Java Data Objects (JDO)

  1. There have been some interesting developments in the JDO world recently. Sun's Forte 4 IDE is dropping its JDO support (which was written by JDO Spec Lead Craig Russell and Sun's JDO team) while recommending that Forte customers use Cocobase as an alternative. Ironically, Craig and the JDO team at Sun have been re-assigned to implement the Sun One AppServer's CMP engine. Earlier this week, the JDO 1.0 spec and RI were released.

    The choice to drop transparent persistence support from Forte was not officially explained, although it has been noted that Forte's transparent persistence (TP) only supported JDO spec 0.5, and this played a role in its not being included in version 4. Further spicing up the drama, Forte Product Manager Dan Roberts recommended using Cocobase as an alternative. Cocobase is one of the leading O/R mappers; it is built by Thought Inc., one of the most vocal opponents of JDO on TheServerSide and other newsgroups.

    Ironically, JDO Spec Lead Craig Russell and the transparent persistence team that had been working on the TP tooling for Forte have been assigned the task of implementing the CMP engine for the next version of the app server formerly known as iPlanet (Sun One AppServer). In this new role, the TP team will implement Sun One's CMP engine using JDO, as defined in chapter 16 of the JDO spec.

    Finally, the JDO 1.0 spec, the reference implementation and the test compatibility kit were released earlier this week. Vendors will likely be spending the next few months completing their implementations and claiming full compatibility.

    Another interesting event in the JDO world is the emergence of the website, a completely grassroots JDO portal that has been built by a consortium of JDO vendors. It is rare to see all the participants in a particular industry band together to grow a community for their users.

    Read Forte Product Manager Dan Roberts' Statement of Feature Removal, from the Forte EAP Mailing list.

    Read JDO Spec. Lead Craig Russell's Statement, from the Forte EAP Mailing list.

    Threaded Messages (105)

  2. Clicked on the link for Cocobase. $6000/developer.
    That's a tad too much for me and the Northwest Alliance for Computational Science and Engineering. Tools like this are nice, but are they really worth $6000/developer?

    That's not a rhetorical question. I really wanna know who buys it, and if they felt the expense was justified.

  3. That is pricey, but there are no runtime costs for it. TopLink, on the other hand, is completely out of their minds to think that they can charge what they charge. We've used TopLink for a couple of years (it's a great product), but their runtime costs are going to push customers over to Cocobase. We have recently purchased Cocobase for our next project. If you have a project that runs $75 - $100K a pop, you can easily split 10-15 developer seats up and not even notice.

    However, if you want a good open source ORM, take a look at Object/Relational Bridge. I've recently started using this for my Struts book and I've got to say, what a good product this is. There are some issues, but overall, I'm really liking it. There are many more out there of course. Here are a few that I've found doing research for one of the chapters:


  4. You must have easier managers/bosses than I have. Both my current projects run over $800,000 a pop, and I think I'd have a hard time getting my boss to spring for it.

    The problem is actually one of mentality. I would have a much harder time justifying this to the boss. "Whaddya need this for? We never used it before. What's wrong with JDBC, or BMP/CMP in the app server?" etc.

  5. How much time do you spend on your O/R development?
    With tools like Cocobase, TOPLink, and Castor, you can really minimize the code and get the transparent persistence that you want. The tools make it easy to get the objects mapped, etc.; then you have the runtime, which can do heavy caching and get great speed improvements.

    I know that I would never want to do a CMP Entity with relationships by hand!
  6. Price Of Cocobase[ Go to top ]

    Cocobase is priced per developer that works on the code, whether they use the tool or not (the notorious background users), and I believe is per project. So if my project is $100,000 and 5 programmers are working on it, that's a big chunk of the budget gone, even with some discounts for people who just happen to have caught sight of the disc.

    I'm sure it's a great product, but tools like ObjectMatter's VBSF and open source projects like Castor etc. can provide the same for a fraction of the cost.

    Cocobase's argument about time saved etc. is persuasive until you look at how high the standard of the competition has become.
  7. Price Of Cocobase[ Go to top ]

    I wouldn't say that Cocobase is the best, but so far it has been a good product. It could stand to have a better OQL facility. TopLink's is very nice. Of course, it's been around for a long time.

    Regardless of the product, open source or not, unless you're in the business of ORM, you should get yourself a persistence mechanism other than "SELECT XXX FROM XXXX WHERE XXXX".

    I also agree with Nick. The open source versions that I prototyped while trying to find one to use in the Struts book all had some decent features. The problem is finding the one that has all the features you need:

    Transaction and Nested Transaction Support
    Primary Key Auto Increment Support
    Indirection or Proxy References
    A Good OQL

    This is where commercial products may have a one up right now on open source solutions. However, there are a few that have all of these features and more. Not every project has the same requirements and you've got to find the right one for you.

  8. Price Of Cocobase[ Go to top ]


    What are the current alternatives to TopLink and Cocobase?

  9. Price Of Cocobase[ Go to top ]

    I listed several alternatives just a few posts up. There are actually many more out there than are listed here. These are just the ones that I found with a small amount of searching.

  10. Price Of Cocobase[ Go to top ]

    A good alternative to Cocobase and TopLink is ObjectFrontier. They are very smart and on top of the J2EE technology; basically they cover everything from JDO to EJB (CMP, BMP) and JCA.

    I have used Castor and it worked great on OQL and 1-N relationships and is the easiest open source product I have ever used for mapping RDBMS to Objects. The documentation is not great though.
  11. Price Of Cocobase[ Go to top ]

    Another one that is very reasonably priced for EJB 2.0 persistence is

    purchased at Flashline for $199 per developer with a redistributable runtime. Works with JBoss, WebLogic, HPAS, WebSphere and Orion. They include the source code for the product as well.
  12. Price Of Cocobase[ Go to top ]


      Can you share which open source products met most or some of the features you were looking for (or should we wait for the chapter)?

    Also, what do you mean by "Indirection or Proxy References" in the context of O/R mapping?

  13. Price Of Cocobase[ Go to top ]

    Indirection or "Proxy References" are used for "lazy loading" of object references or collections.

    Say I have an object that has a 0..* relationship; using indirection, the collection on the parent object will not actually get populated until it's first accessed. This also works for 1..1 and M..M. A few solutions actually support lazy loading of simple attributes as well.
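    To make the idea concrete, here is a minimal sketch (plain Java, all names hypothetical) of how a mapper can hand out a stand-in collection that only hits the database on first access, using a dynamic proxy:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Arrays;
import java.util.List;

public class LazyListDemo {
    // Hypothetical loader standing in for the real database fetch.
    interface Loader { List load(); }

    /** Returns a List proxy that defers calling the loader until first use. */
    static List lazyList(final Loader loader) {
        return (List) Proxy.newProxyInstance(
                LazyListDemo.class.getClassLoader(),
                new Class[] { List.class },
                new InvocationHandler() {
                    private List target;            // populated on first access
                    public Object invoke(Object proxy, Method m, Object[] args)
                            throws Throwable {
                        if (target == null) {
                            target = loader.load(); // the "lazy load"
                        }
                        return m.invoke(target, args);
                    }
                });
    }

    public static void main(String[] args) {
        final boolean[] loaded = { false };
        List children = lazyList(new Loader() {
            public List load() {
                loaded[0] = true;                   // would hit the database here
                return Arrays.asList(new String[] { "child1", "child2" });
            }
        });
        System.out.println("loaded before access: " + loaded[0]); // false
        System.out.println("size: " + children.size());           // triggers load
        System.out.println("loaded after access: " + loaded[0]);  // true
    }
}
```

    Real products do this with generated proxy classes or bytecode, but the observable behavior is the same: the parent object is usable immediately, and the relationship is only resolved when touched.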

    As far as open source implementations go, so far I like object/relational bridge. My criteria for the book was:

    1) Free
    2) Non-intrusive (I didn't want to import proprietary interfaces in my bo's)
    3) No byte-code manipulation
    4) Support for Optimistic locking, transactions, caching, etc...
    5) Decent documentation

    There were several that met these criteria, but Object/Relational Bridge really stuck with me and is performing well. I'm not building some huge J2EE project with it, but so far, so good. One thing that is really nice about it is that it's very modularized and extensible. For example, if you don't like the caching mechanism, you can replace it with one of your own. A very powerful feature. The other nice thing is that all of the mappings are specified in a single XML file. One thing that turned me off about several of the other open source frameworks is that things had to be done all over the place.

    Keep in mind that I didn't consider commercial products to use with the book, because I wanted to ship the product along with the chapter so that readers could run the complete example. All they had to do was get a database.

    As far as commercial goes, I've used TopLink a great deal. We have been using Cocobase lately. Both are good products. TopLink has the edge on feature set a little, just due to the fact that it's been around for a while. I've heard great things about the product from ObjectFrontier as well. I'm not sure what the cost is, however.

  14. Price Of Cocobase[ Go to top ]

    If you're looking for an open source product, take a look at Castor. It meets all the criteria that you've laid out and more. Castor consists of two major components - JDO and XML. Note that Castor JDO is *not* compliant with the Sun JDO spec. (I believe it was around before the Sun JDO spec even came about.)

  15. Price Of Cocobase[ Go to top ]


    We at ObjectFrontier recently modularized/re-packaged the product to make it affordable to our customer base. Our products are now offered as follows:
    **FrontierSuite for Reverse Engineering
    **FrontierSuite for New Application Development
    **FrontierSuite for migrating EJB 1.1 applications to EJB 2.0 applications and for migrating applications from one app server to another app server (DeployDirect)
    **FrontierSuite for JDO
    All the above includes our sophisticated persistence engine with JMS based distributed caching mechanism. We also support Optimistic, Pessimistic, Read Only, Blind Update modes, Indirection or Proxy References, EJB QL and more.
    By modularizing we are now able to offer a very competitive pricing model. Our pricing ranges from $1,099/developer license to $6,000/developer license (for all the above modules).

    Shirish Shetty
    SShetty at ObjectFrontier dot com
  16. Price Of Cocobase[ Go to top ]


      I've heard nothing but good things about the product. In fact, I've worked with Dominic on a security project, and Srikanth and I worked together building a couple of Internet banking products for Harland Inc. They are both very smart and I'm sure it's a great product. My response above was to a question about open source solutions.

    I also heard that ObjectFrontier was about to land a big customer in Europe, and that the product underwent some big evaluation and stood up very well against the other commercial products.

  17. Price Of Cocobase[ Go to top ]

    To me, that does not look like a scheme for increasing transparency - it rather adds confusion. What would I have to buy if all I need is a JDO library (not some "suite". "Suites" are for suit-wearers:-)) that allows me to persist my objects in a managed (J2EE, no CMP) environment?

  18. Price Of Cocobase[ Go to top ]

    Chuck Cavaness wrote:

    2) Non-intrusive (I didn't want to import proprietary interfaces in my bo's)


    I'm really enjoying some of the points that you are making here, Chuck. I've always been thrown a little when I hear the phrase "non-intrusive" thrown around in relation to O/R mappers. What exactly is it that people, in general, are looking for here?

    Is it non-intrusiveness in the sense that you can take any class instance and persist it without having to make any code changes to that class? Right now, I only know of a few approaches:

    - Bytecode modification, as in JDO. This is "intrusive" in the sense that your code is being modified - just by an external tool.

    - Description of public properties and methods of an object that can be called to persist/recover the object - possibly in the form of conformance to the JavaBeans spec. This relies on the object having public methods to both set and get its attributes, and also on a public constructor.

    - The explicit use of an API to persist/recover the attributes of an object. This still requires that the object have some sort of interface for retrieving and persisting those attributes. I really don't see how this mechanism is any different from JDBC, which formally, seems to fit the 'non-intrusive' criteria.
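    As a rough illustration of the second approach, here is a sketch (hypothetical names, standard java.beans only) of how a mapper could enumerate an object's public properties via the JavaBeans introspector before generating its SQL, without the class importing anything proprietary:

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class BeanSnapshot {
    /** Reads every readable JavaBeans property into a name->value map,
        the way a mapper could before generating an INSERT or UPDATE. */
    public static Map describe(Object bean) throws Exception {
        Map values = new HashMap();
        // The second argument stops introspection at Object, skipping getClass().
        BeanInfo info = Introspector.getBeanInfo(bean.getClass(), Object.class);
        PropertyDescriptor[] props = info.getPropertyDescriptors();
        for (int i = 0; i < props.length; i++) {
            Method getter = props[i].getReadMethod();
            if (getter != null) {
                values.put(props[i].getName(), getter.invoke(bean, new Object[0]));
            }
        }
        return values;
    }

    // A plain business object: public no-arg constructor, getters and setters.
    public static class Customer {
        private int id;
        private String name;
        public Customer() {}
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public static void main(String[] args) throws Exception {
        Customer c = new Customer();
        c.setId(42);
        c.setName("Acme");
        System.out.println(describe(c)); // the two properties, name and id
    }
}
```

    The cost of this approach, as noted above, is that the object must expose public accessors and a public constructor; the benefit is that the class itself stays persistence-ignorant.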

    Thanks and God bless,
    -Toby Reyelts
  19. Price Of Cocobase[ Go to top ]

    That's really what I was referring to, and Scott Ambler has written about it in his various papers. For me, the less my application knows about how it's being persisted, the better. That goes for bytecode manipulation as well. Although this is a different type of intrusion, it still personally bothers me.

    Just my opinions,
  20. Price Of Cocobase[ Go to top ]

    > For me, the less my application knows about how it's being persisted, the better.

    That sounds as compelling as it is questionable. Drawn to the extreme, this leads to developers who regard a database and SQL as "evil", usually ending up writing dead-slow applications.

    A database is a remote/distributed resource which is accessed under high concurrency conditions. Thinking you can completely delegate that away to some tool is naive. It is a danger with many O/R frameworks that they create an illusion of transparency, which you may not awake from until it is (almost) too late.

    Note that I am not debating the usefulness of O/R tools.
  21. Price Of Cocobase[ Go to top ]


    What is naive is to think that professional tools won't let you optimize the way SQL statements are generated.

    JDO compliant O/R mapping tools will save you time on 99% of your business code, with "acceptable" response times. For the remaining 1%, most implementations will let you write SQL or will support other relevant tuning features.

    Cheers, marc
  22. Price Of Cocobase[ Go to top ]

    If you've ever had to switch persistence frameworks after a product was in production, then you would completely agree with this statement. This statement doesn't say that the application knows nothing about persistence, it just says that "less is better".

    Through the use of the "Business Delegate" and "DAO" patterns, a persistence framework can be isolated and put in the proper place in a layered architecture. Controlling dependencies is what this statement is really about.
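    A minimal sketch of that isolation, with purely illustrative names: the business tier codes against a DAO interface, and the concrete persistence choice is made in exactly one place, so swapping frameworks later touches only the implementation class:

```java
import java.util.HashMap;
import java.util.Map;

// The business tier depends only on this interface; whether it is backed
// by JDBC, an O/R mapper, or JDO is a detail hidden behind the DAO.
interface ProductDao {
    void save(String sku, double price);
    Double findPrice(String sku); // null if unknown
}

// One interchangeable implementation. A JdbcProductDao or CocobaseProductDao
// (hypothetical names) could be swapped in without touching business code.
class InMemoryProductDao implements ProductDao {
    private final Map prices = new HashMap();
    public void save(String sku, double price) {
        prices.put(sku, Double.valueOf(price));
    }
    public Double findPrice(String sku) {
        return (Double) prices.get(sku);
    }
}

public class DaoDemo {
    public static void main(String[] args) {
        ProductDao dao = new InMemoryProductDao(); // chosen in one place only
        dao.save("X-100", 19.95);
        System.out.println(dao.findPrice("X-100")); // prints 19.95
    }
}
```

    The application never names the framework; only the factory/wiring code does, which is exactly the "less is better" dependency control being argued for here.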

    Time and time again, developers think that putting SQL code right up in their business objects or wherever is going to make the code faster. That's nonsense. Vendors that build persistence frameworks are experts in this field; this is what they do. The same mindset of "we can build a persistence framework better" is what makes for poorly architected products. Unfortunately, not everyone is a JDBC expert. It's one thing to write "SELECT * FROM CUSTOMER WHERE XXX", but it's entirely another to understand transactions, nested transactions, concurrency control, isolation levels, caching, indirection, optimistic vs pessimistic locking, cascading operations, mapping strategies, etc...

    Databases aren't evil and neither is SQL, remember SQL is generated from ORM tools. What's evil is to couple an application to a particular solution or to embed SQL that's possibly database-specific into your application, both of which make maintenance and change more difficult.

    As far as performance goes, using an ORM is really no slower than writing SQL manually. Most, if not all, frameworks give you a query language that, in effect, allows you to manually execute SQL if you choose. In many cases, the persistence framework will generate better SQL, because the vendors have had time and feedback from many organizations; whereas how often does the SQL that we might write get peer reviewed?

    I suggest anyone who is confused by any of this, read anything that Scott Ambler has written about persistence frameworks. His main site is


    p.s. Maybe we ought to change the title of this thread :)
  23. Price Of Cocobase[ Go to top ]

    Chuck>>What's evil is to couple an application to a particular solution or to embed SQL that's possibly database-specific into your application, both of which make maintenance and change more difficult.

    And that's why JDO is so important as an emerging standard.

    cheers, marc
  24. Price Of Cocobase[ Go to top ]

      Good idea to get us back to the original thread idea :)

  25. Is JDO the answer?[ Go to top ]

    I think we are really talking about two things...

    First, I think we all agree that separating business objects from how they are persisted is a good thing. And as Chuck has pointed out, Scott Ambler has written some great papers that go into great detail explaining why you would want to do this.

    The second thing is deciding how this should be done. This thread alone points out a dozen or so ways of implementing this. Cocobase, TopLink, Castor, JDO, CMP, BMP, direct JDBC, etc.

    JDO is just one possibility. And from what I gather (and for whatever reasons) it is not even close to the most popular idea. Just last week a news item was posted on this site announcing JDO's 1.0 release and there has been *very* little response. I'm not sure if JDO is too little/too late, and I am not sure that all of these other implementations are going to want to conform to the JDO standard. I'm not even sure if all of these other products *like* the JDO standard.

    As for OODBMS vs RDBMS, I'm not sure how relevant this is. The fact remains that for the vast majority of applications, the underlying persistent storage is a relational database. And I don't see that changing anytime soon.
  26. Ryan,

    A good question. But if you look at all the solutions that have been listed in this thread, they fall into two categories:

            a. Proprietary. You can put all the O/R mapping and persistence solutions provided by various commercial and open source efforts into this category.

            b. J2EE CMP. This is a standards based solution for persistence.

    The J2EE CMP solution is a very good attempt at decoupling persistence from programming and putting it in its place - along with the rest of the plumbing services like transaction management, connection management etc. But it comes with a price tag - both in terms of money and application performance. Applications running in application servers are inherently heavy due to the variety of services that are on offer - whether you want them or not.

    Let us consider the alternative. Proprietary solutions, though good, have the disadvantage of tying companies to vendors. But let us put this in perspective by saying that they have the best possible solutions for persistence as of today. And that is the reason why we see a lot of application server vendors turning to these persistence experts of late. But the thorn of 'vendor dependence' remains, however good the solution. Why can't we have some solution for persistence that is based on standards and is applicable on all platforms, be it J2SE or J2EE?

    And the answer to that question is JDO. It is an attempt at standardising the persistence solutions available in the market. Through this standard, we now have an opportunity to consolidate all the efforts in this area and compete in a transparent manner. The specification is not so rigid as to limit innovation among the competitors. There is more than ample space for any JDO vendor to provide his or her value-add and still do it in a way that is standards compliant. The vendors benefit and the developers benefit. I think JDO is a win-win thing for persistence.

    Taking a realistic look at JDO, it may not be the ideal solution to the problem - it does not follow the somewhat similar J2EE CMP model, and it does not recognise all the relations we are used to in the object-oriented world - but it sure is the best solution available. And the solution can only become better with all the persistence experts like ObjectFrontier, CocoBase and TopLink coming together to help it mature. There is no other alternative in sight to JDO as a standard.

    Viva JDO. Viva JDO
  27. Is JDO the answer?[ Go to top ]

    <snip> As for OODBMS vs RDBMS, I'm not sure how relevant this is. The fact remains that for the vast majority of applications, the underlying persistent storage is a relational database. And I don't see that changing anytime soon.
    I think this would be one very strong reason against JDO. Since, when an RDBMS is used underneath, CMP is probably as good as JDO (by and large).

    My view is that JDO is probably a good option for standalone, simple applications. But once it goes into more complex applications, requiring, say, an app server, then CMP is probably the best option. Now, the app server vendors may use JDO for persistence. But that is transparent to the application.


  28. Is JDO the answer?[ Go to top ]

    I think this [the underlying persistent storage is a relational database] would be one very strong reason against JDO. Since, when an RDBMS is used underneath, CMP is probably as good as JDO (by and large).

    Except JDO is simpler, allows you to build dynamic queries, gives you better cache control (including running queries against the cache), can be used with or without an app server, etc.
  29. Is JDO the answer?[ Go to top ]

    and JDO is object-oriented, MUCH MORE efficient, and you don't need to write up to 6 classes for each persistent business object!
  30. Is JDO the answer?[ Go to top ]

    It seems the heart of the discussion is about performance.
    So, first of all, everybody seems to agree that O/R mapping tools can save a huge amount of time during the development cycle.

    1. Performance of a system is not limited to SQL tuning. Only System Integrators don't admit this, because they sell consulting days to customers just to rewrite and rewrite again the same SQL statement, changing the position of a criterion within the WHERE clause. The truth is that most SIs have a very bad understanding of DBMS internals.

    2. If performance was the big thing people would never use RDBMSs, which are well-known for being inefficient (this is the cost of all the benefits they bring in terms of security, fault tolerance, referential integrity, ...). If you are interested in performance just use an ODBMS, but I think you will then promptly ask for reporting tools, ad hoc queries, DB replication... all things not related to performance. That's an important point for JDO. Write some Java business code, store it in an RDBMS with a JDO JDBC driver, tune it as far as you can, and then store it in an ODBMS. If you're really interested in ultimate performance, you will be able to experiment with no change at the source code level.

    3. So, we all admit that performance is an important aspect of a system, but not the only one.
    The real debate seems to be: a generic tool/framework/layer cannot be efficient. This is definitely not true. Each time customers have paid to compare a manual JDBC development against an O/R tool, on a real complex business model, with real deployment constraints, the tool has shown better performance. Professional O/R mapping tools come with a lot of optimization features that cannot be designed by one business project team. The tool has been optimized because it has been used in various business and technical contexts. Kernel-skilled engineers have spent months tuning the access layer. Manual JDBC coding can only compete on trivial models (a la TP-C).

    In some special cases (less than 1%), manual DB coding might be the only way to really tune the system. In that case the question is: "how easy/safe is it to mix manual coding and tools?"

    cheers, marc
  31. To SQL or not to SQL[ Go to top ]

    there you have a new title..

    As my final remark on this issue: as I said, I was not debating the usefulness of O/R tools. Just don't expect miracles in terms of "optimizations" from the vendors. Much of that is myth. If you have an object graph that consists of 5 relationships with, say, 5 objects on each node, and your application does "clean object-oriented", database-agnostic navigation to, say, gather data for a tabular display, you (i.e. the framework) will easily end up executing several hundred remote SQL calls traversing the graph. Often, you can do the same with one SQL statement - in a fraction of the time. I know of no O/R framework that offers a query language powerful enough to match that.
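    The effect is easy to demonstrate. The toy sketch below (hypothetical names, no real database - a counter stands in for round trips) navigates just two levels of a 1..* graph object-by-object and counts the simulated calls:

```java
import java.util.ArrayList;
import java.util.List;

// Counts simulated database "round trips" to show why naive per-object
// navigation multiplies calls while a set-oriented query needs one.
public class NPlusOneDemo {
    static int roundTrips = 0;

    /** Simulates SELECT ... WHERE parent_id = ? - one call per parent. */
    static List loadChildren(int parentId, int fanout) {
        roundTrips++;
        List ids = new ArrayList();
        for (int i = 0; i < fanout; i++) {
            ids.add(Integer.valueOf(parentId * 10 + i));
        }
        return ids;
    }

    /** Navigates two levels of the graph, one query per object visited. */
    static int navigate(int fanout) {
        int visited = 0;
        List level1 = loadChildren(0, fanout);                // 1 call
        for (int i = 0; i < level1.size(); i++) {             // fanout calls
            int parentId = ((Integer) level1.get(i)).intValue();
            visited += loadChildren(parentId, fanout).size();
        }
        return visited;
    }

    public static void main(String[] args) {
        int rows = navigate(5);
        // 1 call for the parents + 5 calls for their children = 6 round trips;
        // a single JOIN could have returned the same 25 rows in 1 round trip,
        // and each extra relationship level multiplies the call count again.
        System.out.println(rows + " rows in " + roundTrips + " round trips");
    }
}
```

    Two levels already cost 6 round trips for 25 rows; with the 5 relationships described above the count climbs into the hundreds, which is the gap a set-oriented query closes.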

    What I am saying is - don't stop thinking. A database is still a database. I have seen too many applications that ignored this simple fact and then ended up trying to blame someone (like that "crappy persistence framework") for the bad performance.
  32. To SQL or not to SQL[ Go to top ]

    A database is still a database.

    I whole-heartedly agree with you. But who says that it has to be a relational database? And who would, when programming with "round" objects, want to fit them into "square" tables? If I had to disassemble my car each time I wanted to park it and put the parts onto a shelf, it wouldn't be long until I'd look for a better way of parking it, like leaving it in one piece and driving it into a car-park. (OK, granted, all these examples are not mine. But I think they describe the situation we have here very nicely.)

    So, why not store your objects in some sort of OODB? The more complex your object graph (not only composition, but lots of inheritance as well), the higher the performance gain will be...

    Cheers, Lars
  33. To OODBMS or not to OODBMS[ Go to top ]

    I should have known that this would come up next. The well-known "round peg/square hole" proverb isn't missing, either.
    Interestingly, there are good arguments showing that viewing OODBMSs as a technological advance over RDBMSs may be questionable altogether. The main point is that the relational approach is not tied to a certain programming model and is therefore more generic.
    More practically, most OODBMSs to this day lack, among other things, a good declarative query language - which may be tied to the point above.
  34. To OODBMS or not to OODBMS[ Go to top ]

    Christian, would you care to elaborate on what "a good declarative query language" is from your point of view?

    I think that many of the query languages provided with the APIs of OODBMSs - for instance OQL, or, in this case, JDO-QL - are well suited for the tasks they are designed for, e.g. traversing object graphs and selecting instances by the values of their members. But they are not, it should be noted, suited to be used by people still thinking in tables :)

    And if we want to design our software in an object oriented fashion, we should ask ourselves if we want a generic database technology or one that fits our design approach.

    Just my $.02,
  35. To OODBMS or not to OODBMS[ Go to top ]

    I think you should not be so short-sighted about table-thinking developers.

    Actually, O/R mapping is designed to let most developers concentrate on object-oriented models while only one developer takes charge of mapping and assembling; that is to say, only one table-thinking developer.

    So, nowadays O/R mapping technology is developing to bridge the gap between objects and relational tables, not to force everyone to think about tables and objects simultaneously.
  36. To SQL or not to SQL[ Go to top ]

    Some OR-Mapping products like JDX do provide optimizations which minimize the number of database calls considerably for retrieving complex object graphs with 1-1 and 1-many relationships. Other optimizations like using prepared statements and connection pooling also help a lot in improving performance. It is quite possible to get sub-optimal performance because 1) the OR-Mapping layer does not employ/offer optimized data access query patterns or 2) the application does not fully leverage the power of the OR-Mapping layer.

    -- Damodar Periwal
    Software Tree
    Simplify Data Integration
  37. To SQL or not to SQL[ Go to top ]

    Christian Sell wrote:

    If you have an object graph that consists of 5 relationships with, say, 5 objects on each node, and your application does "clean object-oriented", database-agnostic navigation to, say, gather data for a tabular display, you (i.e. the framework) will easily end up executing several hundred remote SQL calls traversing the graph. Often, you can do the same with one SQL statement - in a fraction of the time. I know of no O/R framework that offers a query language powerful enough to match that.


    It isn't impossible for the O/R mapping layer to optimize those "hundreds" of remote SQL calls into a single call. In fact, I've been spending some time thinking of ways this can be done.

    Most of the O/R products I know of, right now, just end up fetching all of the (non-large) attributes in the object's table along with any of the object's dependents. So, instead of mapping into "hundreds" of calls, it usually results in a fairly small count. It's definitely not optimal, but it's nowhere near as bad as you make it sound.

    God bless,
    -Toby Reyelts
  38. To SQL or not to SQL[ Go to top ]

    There is simply a very tight limit to the optimizations any framework can perform when the application logic is designed in a database-agnostic way.
    Some of this can be alleviated by tweaking available framework options, but at some point you will not be able to do without a good query language, by which you can make the retrieval code explicit and transfer control to the layer that really should do the optimizations (and has been doing them for decades) - the database itself. If a framework offers a query facility powerful enough to match SQL (or more), so much the better - it's going to be proprietary, however, because the standardized options (JDO-QL, EJB-QL) are still too limited (there's ODMG OQL, which nobody implements fully, and which nobody talks about anymore nowadays).
    Anyway, thinking that a framework can completely shield you from database aspects is not going to work - if your app has reasonably sophisticated data retrieval requirements, that is. If all you do is load - modify - store, forget my statements.
  39. To SQL or not to SQL[ Go to top ]


    "Anyway, thinking that a framework can completely shield you from database aspects is not going to work - if your app has reasonably sophisticated data retrieval requirements, that is. If all you do is load - modify - store, forget my statements."

    I agree with this statement, but I don't see why you can't have a hybrid of both. In my experience (mostly with commerce web site development), you have two ways you deal with the data.

    First, you need to modify the data. This is usually done either through a GUI or through a batch update. Either way, blazing speed is not going to be critical. If you are updating one record at a time (e.g. modifying a product's price) it's going to be fast anyway. If you are updating a bunch of records in a batch, speed is usually not critical here either. In either case, you can take advantage of what CMP or O/R mapping tools take care of for you - object-relational mapping, transactions, locking, etc.

    The second data-access "mode" is querying the data. If it is a simple query, you can push this functionality off to the CMP or O/R engine, too. If you need a more complex query that simply can't be optimized with these tools, THEN go to another method - stored procedure, hand-crafted SQL, whatever. Also, when dealing with queries, you usually don't need to return complete objects, just a subset of the object's data. That is, not a collection of full blown Vendor or Product objects, just a "view" of the key pieces of these objects. Also, if this data does not have to be real-time, it can be cached to some degree or made with non-transactional, read-only queries.

    Isn't there a J2EE pattern about this - "Fast Lane Reader" or something like that? Also, if you separate the data access layer behind something like a session facade, the client never needs to know you are accessing the data in two different ways. Something like:

    ProductSessionBean.update ---> ProductEntityBean.update
    ProductSessionBean.delete ---> ProductEntityBean.delete
    ProductSessionBean.getSomeProducts ---> ProductReader.getSomeProducts

    The point is, I don't see why you can't leverage the benefits of both types of data access, depending on your needs.
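A minimal sketch of that facade split, with all class and method names hypothetical (Product/Vendor above are just examples): updates and deletes flow through an entity-style object one record at a time, while the bulk read goes through a lightweight reader that returns a flat view rather than full-blown objects.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical stand-in for an entity bean: handles one record at a time.
class ProductEntity {
    private final Map<String, Double> prices = new LinkedHashMap<>();
    void update(String sku, double price) { prices.put(sku, price); }
    void delete(String sku) { prices.remove(sku); }
    Map<String, Double> data() { return prices; }
}

// Hypothetical "Fast Lane Reader": returns a flat view, not full objects.
class ProductReader {
    static List<String> getSomeProducts(ProductEntity source) {
        return source.data().entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.toList());
    }
}

// The session facade hides which path each call takes.
class ProductSessionFacade {
    private final ProductEntity entity = new ProductEntity();
    void update(String sku, double price) { entity.update(sku, price); }
    void delete(String sku) { entity.delete(sku); }
    List<String> getSomeProducts() { return ProductReader.getSomeProducts(entity); }
}
```

The client sees one facade; whether a call ends in an entity update or a read-only query is an implementation detail behind it.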
  40. To SQL or not to SQL


    "The point is, I don't see why you can't leverage the benefits of both types of data access, depending on your needs."

    I don't either ;-). Basically, that was the point of my argument all along. The only point where I might slightly disagree with you is with regard to "batch processing", as I have found exactly that to be an environment that imposes strict performance requirements, and where, for example, the object (in-memory) transaction features of the average ORM tool are unnecessary. But this depends on your specific requirements, of course. In summary, I fully agree.
  41. Christian Sell wrote:

    Just don't expect miracles in terms of "optimizations" from the vendors. Much of that is myth. If you have an object graph that consists of 5 relationships with, say, 5 objects on each node, and your application does "clean object-oriented", database-agnostic navigation to, say, gather data for a tabular display, you (i.e. the framework) will easily end up executing several hundred remote SQL calls traversing the graph. Often, you can do the same with one SQL statement - in a fraction of the time. I know of no OR framework that offers a query language powerful enough to match that.
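To put rough numbers on the quoted scenario, here is a small counting sketch (illustrative only, not from any vendor's code): naive navigation issues one SELECT per object per level of the graph, while a multi-way JOIN fetches the same data in one round trip.

```java
// Counts the SQL statements issued when naively navigating an object
// graph: one query for the root, then one query per object at each
// subsequent level. With branching factor 5 and depth 3 that is
// 1 + 5 + 25 + 125 = 156 round trips - versus a single JOIN.
class QueryCount {
    static long naiveQueries(int branching, int depth) {
        long total = 0;
        long levelSize = 1;                 // number of objects on the current level
        for (int level = 0; level <= depth; level++) {
            total += levelSize;             // one SELECT per object on this level
            levelSize *= branching;         // each object fans out to 'branching' children
        }
        return total;
    }
}
```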

    Though miracles should not be expected, it is also not realistic to ignore the optimization value that vendors provide in their products. Query optimization can be achieved in many ways:

         1. Simple queries. A simple query like "Select * from employee where salary > 6000" can be optimized by having an index on the salary field as this will be a frequently used query. This will speed up the query execution considerably. This can be handled by the vendor tools during the OR mapping by generating the required indexes.

         2. Complex queries that include navigation. These sorts of operations can be optimized by:

                 a. Framing a single SQL query that navigates to all the related objects through the primary keys and retrieves them in one go. Any further navigation can be done on the retrieved set of data / objects.

                 b. Maintaining a cache of data / objects between the application and the datastore. This will act like a virtual datastore, and all queries can be executed on the cache directly without the need for connecting to the datastore. The cache can handle the task of synchronizing the data with the actual datastore to maintain integrity.

    Though it would be difficult to provide the level of optimization that a seasoned developer who is very familiar with the application can achieve, a lot of vendors do provide a comparable, enterprise level of optimization within their query support. Some even provide ways for developers to supply hints on the optimization, which will be handled manually at deployment time.
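Point 2b above, reduced to a toy sketch (all names hypothetical, not any vendor's API): an identity map keyed by primary key, so repeated lookups are served from the cache instead of the datastore. Real products layer eviction and write-back synchronization on top of this.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal identity-map sketch: each key maps to at most one in-memory
// object, so repeated finds hit the cache rather than the datastore.
class IdentityMap<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader;   // stands in for a datastore read
    private int datastoreReads = 0;

    IdentityMap(Function<K, V> loader) { this.loader = loader; }

    V find(K key) {
        return cache.computeIfAbsent(key, k -> {
            datastoreReads++;              // only cache misses touch the datastore
            return loader.apply(k);
        });
    }

    int datastoreReads() { return datastoreReads; }
}
```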

    We at ObjectFrontier provide highly optimized and fine-grained support for JDO QL as well as OQL. The major features are:

             1. A sophisticated caching mechanism that is transparent to the developers (no extra code is required to avail of caching features): all calls to the database result in retrieval of the entire object graph, which is then available in the cache for future queries, and all synchronization between the cache and the datastore is handled totally in the background.

             2. Control over the object graph that is fetched by a query through different reading modes. The fetching can be configured to use active reading (in which case all the related objects are fetched), lazy reading (in which case only the actual objects are fetched), or a combination of the two, in which some related objects are actively read and others fetched when the need arises.

             3. Prepared statement caching, through which there is no need to re-prepare repeated queries.

    You can check out our implementation at

    S Rajesh Babu
    ObjectFrontier Inc

  42. <Begin Quote
         1. Simple queries. A simple query like "Select * from employee where salary > 6000" can be optimized by having an index on the salary field as this will be a frequently used query. This will speed up the query execution considerably. This can be handled by the vendor tools during the OR mapping by generating the required indexes.
    End Quote >
    If only things were as simple as adding an index to speed up a query, we wouldn't need full-time DBAs. There's a lot that is required to make a database run optimally, especially with large databases, and that's the reason you have DBAs.
    I think OQL and its variants are a poorly designed substitute for SQL. Databases are complex beasts, and trying to replicate their mature functionality (transactions, concurrency, caching, SQL) with an OR mapping tool is not something I favor. Use your OR tools to manage your inserts, deletes, and simple queries. Keep business logic out of your database, but do not hesitate to use the power of your database whenever required. It's been tried and tested and it works.
    Just because SQL/databases are not OO doesn't mean they are bad.
  43. Ravi,

    I agree that things are not so simple all the time. And I also agree that a lot is required to make a database run optimally, and I in no way suggest that OR mapping tools will be able to achieve the level of optimization that DBAs can achieve currently (in the future, who knows ...). But taking a stand against OR mapping tools as a whole for this reason is slightly off-target. All the mature functionality like transactions, concurrency, caching (I am leaving out SQL) etc. can be implemented, and is being implemented, in OR mapping tools with a high degree of success. Take a look at our tool to see this. And do not ignore the benefits that these tools can bring in - your entire team can focus just on the business logic instead of worrying about how to manage connections, how to convert data into objects and objects into data, how to avoid the pitfalls in tuning your database schema against your object model, etc. - they are considerable. OR mapping tools have a very legitimate and important role to play, so why not play it fully by abstracting the complexities of the databases completely from the developers?

    And for the die-hard SQL people, our product, FrontierSuite, provides JCA based connectivity to procedures in the database. So you can leverage the power of database-level SQL for optimizing, as well as delegating the 'worrying' to the tools.

    S Rajesh Babu
    ObjectFrontier Inc
  44. Dear Suresh and everyone else pitching their respective OR tools.
    I am not against OR tools. I have used them in my projects and they are useful things to have. Connection pooling and abstracting away SQL are nice to have. But to wish away the so-called 'complexities' of databases is foolish.
    The OR tools usually bring their own complexity with them. Adding pretty interfaces or XML descriptors to manage them does not change anything. Instead of tuning databases you now spend your time tuning the OR tools.
    I would encourage all OO developers to learn and understand relational databases. Trust me, they are far less complex than the OR tools, and also easier to optimize and tune.

  45. Dear Ravi,

    I am happy that you have nothing against OR tools and in fact find them useful. Learning OR tools and their 'complexities' may involve some effort, but their entire success rests on the fact that it is easier to master this alone than to master two different beasts - Objects and Relations. Given a choice, most people prefer to go with one, and that is why we have OR mapping tools that try to hide the complexities of the RDBMS from developers, who can then increase their focus on the business logic of their applications.

    It is indeed difficult to wish away the complexities of the RDBMS, and that is why it is best left in the hands of the experts. And OR mappers have the required expertise to achieve this to a large extent. I am sure that with increasing acceptance (after all, we now have a new specification called JDO to answer these needs), the day is not far off when the OR mapping tools will be able to achieve the optimization required even in the most complex of cases, due to their continued focus on making their products better.

    Just my thoughts
    S Rajesh Babu
    ObjectFrontier Inc,
    www.ObjectFrontier Inc
  46. Rajesh,

    I'm sorry, but I couldn't disagree more with your view that Java developers should leave the complexity of the RDBMS to the OR experts and the Java developers should then just master Objects and not Relations.

    You might get away with having some junior developers just focussed on the Java code. But anyone else needs the relational data model on the wall by their computer, next to their UML diagrams.

    One reason why a J2EE designer needs to understand relations, and relational structures, is to understand performance. If you want reasonable performance then you need to be thinking about which strategies to use (lazy loading, caching, short transactions to avoid blocking vs long transactions that might involve less database trips overall, the efficiency of loading several objects in one query vs the inefficiency of loading things you may not need, etc). All this relies on understanding the data-model, and how it works. Even if my *code* doesn't ever mention SQL, my design needs to reflect consideration of what the SQL will do.
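One of the strategies listed, lazy loading, can be sketched in a few lines (illustrative only, not any particular tool's API): the related object is fetched only on first dereference, trading fewer up-front loads for possible extra database trips later.

```java
import java.util.function.Supplier;

// Lazy-reference sketch: the related object is fetched only when first
// dereferenced, and the result is memoized for subsequent accesses.
class LazyRef<T> {
    private final Supplier<T> fetch;   // stands in for a database read
    private T value;
    private boolean loaded = false;

    LazyRef(Supplier<T> fetch) { this.fetch = fetch; }

    T get() {
        if (!loaded) {                 // first access triggers the load
            value = fetch.get();
            loaded = true;
        }
        return value;
    }

    boolean isLoaded() { return loaded; }
}
```

Whether this wins or loses against eager loading depends on exactly the considerations above: if the reference is usually dereferenced anyway, the deferred trip is pure overhead.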

    Another reason is that your client will hold you responsible for the data model your project produces. The data model may well out-live your application, and it must be clean.

    A third reason is that debugging is just not possible unless you understand what the persistence layer is doing. Even if the OR tool is completely bug-free, I still need to understand when it is doing what in order to understand the program's behaviour, and I cannot debug unless I understand the program's behaviour.

    I agree that OR mapping tools are a good thing - anything that reduces a rather repetitive set of code improves both development and maintainability of code.

    But programming in ignorance is always dangerous.

  47. Sean,

    At least we agree on some issues, like the need for current developers to think deeply about issues such as -

    Quote "lazy loading, caching, short transactions to avoid blocking vs long transactions that might involve less database trips overall, the efficiency of loading several objects in one query vs the inefficiency of loading things you may not need, etc" - end of quote.

    Any developer has to worry a lot about these things in the normal development environment. And without a thorough understanding of databases, this is not possible. Let us consider two important facts: #1, middle-tier programming has undeniably moved to an object-oriented paradigm, and #2, RDBMSs still rule the data management space and there is no alternative technology that can at present challenge their rule. And the status quo will continue for some years (or maybe decades - who knows) to come. The outcome of this is that developers need to be masters of two beasts - Object and Relational - to be really effective in their jobs. And considering a third fact, that almost all the business logic is nowadays concentrated in the middle tier, the database knowledge a developer requires mainly concerns the optimization of the data transfer operations between the object layer and the RDBMS layer. This makes it an ideal candidate for commoditization, and that is where the OR mappers / persistence tools fit in.

    I do not advocate that developers abandon database and optimization principles in their development work - instead, just treat persistence as a service that can be picked out of a box, and worry only about the business logic. And like any other new tool, there definitely is some effort involved in learning these tools, but they are easier to master because they speak the same language - objects. And they in turn will handle the persistence for you.

    And a lot of the issues that have been listed - concurrency management, active and lazy reads, data caching, transaction management - are developer-controlled and customizable in a good OR mapping / persistence tool. This caters to the varying optimization requirements of developers and applications, such as speed and memory limitations.

    The performance of the persistence tools, which is the only major complaint, is improving rapidly and is already enterprise strength (our customer base proves this), and with the advent of new standards like JDO and CMP, there is a standardized way of consuming and evaluating these services.

    Definitely ignorance is dangerous, but reinventing the wheel is not desirable either.

    Just my thoughts
    S Rajesh Babu
    ObjectFrontier Inc
  48. <Begin Quote
    And a lot of the issues that have been listed - concurrency management, active and lazy reads, data caching, transaction management are developer controlled and customizable in a good OR mapping / persistence tool. And this will cater to varying optimization requirements of the developers/application like speed, memory limitations.

    Definitely ignorance is dangerous, but reinventing the wheel is not desirable either.
    End Quote >

    Hey, I don't get it. Any decent enterprise DB has concurrency management, data caching, etc., and you say that reinventing the wheel is not desirable. So what's your point?

  49. Ravi,

    The point is this:

    Every time a real application is developed, people have to design a relational model that is tuned to their object model + write code that converts their objects into relational data and vice versa + figure out solutions for managing all the issues like "concurrency management, data caching etc" on an RDBMS platform. And they have to think about these in a language that is different from the one they use for the business part of the application, which is object oriented. All this results in an INFORMAL PERSISTENCE FRAMEWORK (or whatever name you want to call it) that is limited in its use, or at the very least an object-relational data converter dependent on JDBC, for almost every project.

    Now if you were to use a ready-made solution like FrontierSuite (our product, by the way), you have an out-of-the-box solution for all the activities related to the database aspect of a project. Cut your effort and costs and spend your time on the business rules of your application.

    And contrary to the myths that these benefits come at the cost of performance (these keep doing the rounds frequently), the performance of FrontierSuite compares very favourably with JDBC based SQL code.

    S Rajesh Babu
    ObjectFrontier Inc
  50. "Definitely ignorance is dangerous, but reinventing the wheel is not desirable either. "

    Hey, I think we agree on most issues. I also think third party O/R mappers are a great idea. They mean that the developers have to *do* less, which is great, especially since the stuff they don't have to do is a tricky area full of traps and difficulties.

    It's just that I keep seeing J2EE projects hit a wall two months before they are due to go live where suddenly performance tests run with many threads and large volumes and find that there are concurrency issues to do with blocking in the database. We've solved these issues, I've learned the hard way about Sybase page-level locking and read locks (as opposed to Oracle's row-level locking and update-only locks). My pain has taught me that I need to understand the database layer, and write my OO application with the RDBMS behaviour in mind.

    So yes, using O/R mapping tools is great. But it doesn't stop us developers needing to be (as you put it) "masters of two beasts - Object and Relational - to be really effective".

  51. To SQL or not to SQL

    "Another reason is that your client will hold you responsible for the data model your project produces. The data model may well out-live your application, and it must be clean. "

    While we have to keep transactions, the underlying persistence, etc. in mind, for most projects and for the most part we need to think of more than just the data.

    In most, if not all, of the projects I do, the 'data' cannot be separated from the business logic. My applications are composed of UI, business logic and persistence. The persistence is there for just that - it is not the data. Other applications don't access the data - they access the application. This includes reporting.

    As long as we continue to think of the data as 'data', we will have trouble developing OO applications. This is why I refer to databases as persistence. Any access to the 'data' should be via the persistence structure. If it can't be done this way, then what is the point?

    If you and your clients think the 'data model' should outlive the application then you just don't get it. The data model is just part of the picture.


  52. To SQL or not to SQL

    I agree with Mark: the DB model is just another part of an application.

    DB-centric developers love to think that the DB model IS the application, that it lives forever, and that the object model should be generated from it.

    In reality, this is just plain wrong. As someone mentioned, OOP dominates almost every part of any modern application, and the RDBMS is the ultimate frontier.

    We are currently writing the third version of our business app. And for each version, the DB model is different and the existing data must be converted. Data lives forever, not the DB model.

    Take a look at the Model Driven Architecture proposed by the OMG:
    "With MDA, functionality and behavior are modeled once and only once (with UML). Mapping from a platform-independent model through a platform-specific model to the supported MDA platforms will be implemented by tools, easing the task of supporting new or different technologies."

    This suggests that the DB model should be derived from a more abstract object model.

    I'm sure DB developers won't like this idea. Just like the dinosaurs didn't like that big asteroid hitting the Earth!

  53. Price Of Cocobase


    Thanks for the recommendation about Scott Ambler's writings on persistence. I just read a few of the papers, which I found concisely described the benefits and approaches of tying together objects and relational databases. I also enjoyed Scott's willingness to clearly state opinions that, while controversial in some quarters, make a lot of sense to the OO practitioner. Even better, he shows us how we can back these opinions up.

    Looking forward to reading your struts book!

  54. cocobase intrusiveness

    I'm in agreement that non-intrusiveness is vital for portability and cleanliness of code.
    I can understand the motivation that vendors might have in forcing code changes (tie-in, etc.).


    1. Like you, I don't like the idea of byte code modification a la Sun JDO.

    2. I disagree with your premise that you have to expose public properties / methods in a class. Under Java 2 I believe private/protected members can be accessed from another class if you set the security manager up correctly - I've never done this personally but I believe it works (see java.lang.reflect.ReflectPermission).
    You can then just specify your mapping in some other form (XML file or whatever) - your mapping is neat and clean and you have no vendor (or application) tie-in.
    I believe tools like Castor can do this.
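The reflection route described here does work on a standard JVM: Field.setAccessible(true) suppresses the access check (guarded by ReflectPermission "suppressAccessChecks" when a security manager is installed; in a default standalone JVM it succeeds). A minimal sketch, with hypothetical class names:

```java
import java.lang.reflect.Field;

// Hypothetical domain class with no public accessors at all.
class Account {
    private int balance = 100;   // no getter or setter on purpose
}

// Minimal field-level mapper: reads and writes private fields by name,
// the way an XML-driven mapping layer could, without touching the class.
class FieldMapper {
    static Object get(Object target, String fieldName) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);     // bypass the 'private' modifier
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    static void set(Object target, String fieldName, Object value) {
        try {
            Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);
            f.set(target, value);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }
}
```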

    The last time I tried to use Cocobase, I struggled for a long time trying to get it to work in a non-intrusive way.

    Please accept my apologies if Cocobase has moved on since then, since it was a long time ago. But in our time frame we just couldn't get it to work in that way.

    I remember mentioning this at the time to a Cocobase salesman, who instead of being constructive decided to insult me personally and act in an incredibly childish manner.

    I hope things have moved on since then for Cocobase in both their persistence mechanism and the way they treat potential clients, best of luck to them...

  55. Personally, I don't understand why people get so worked up about byte code modification. I mean, who cares? It's just another step in the compile cycle. If you have some good ant scripts, you can completely hide the process.

    If we can have a truly crystal-clear object model, at the expense of an extra step in the compile cycle, then I'd take the crystal-clear object model any day.

    Functionally, a byte-code enhanced set of classes can still be run in your standard VM (without your JDO engine), and in the JDO engines of competitors. The point I am trying to make is that byte code modification won't tangibly get in the way of a developer; it can only save time. So why does byte-code enhancement bother people so much?

  56. I'll tell you two reasons that I prefer not to use an ORM that does it, others will have their own reasons.

    The first reason is that developers go to great lengths to ensure that the code is portable, thread-safe, follows proper programming practices, etc. To then introduce risk by letting an outside influence modify the code, in ways that are not immediately apparent, doesn't make good risk/reward sense to me. I'm saying "doesn't make sense" because there are other, more obvious ways of achieving the same results. I guess the same thing happens with EJB 2.0 app servers, since they are effectively generating a bunch of byte-code, but with respect to ORM frameworks, this is not necessary and is an unnecessary risk (in my opinion).

    And from a security standpoint, does this really make sense? You might think that surely no reputable company would put any easter-egg type software in the product, but we are hearing about this sort of thing more and more lately. No thanks, I'll just let the ORM framework introspect on my properties.

    Probably not the most convincing argument, but I'm sure someone else will have an opinion.

  57. I think there are two big issues

    I understand your concern that your developer must rely on an outside influence.

    But really - if I use a non-manipulating ORM do I really have any more transparency? In theory perhaps I could decompile the ORM (probably violating my licence in the process) and have a look at what's happening. But in practice they could easily obfuscate things anyway. Once you use a third party library you have to trust it, and you can't assume that you can crawl through the code checking it.

    _My_ concern is that byte code manipulation is hard to do well.

    Here I think the problem is that the JDO Reference Enhancer needs to be well written on a stable base, such that the writers of the enhancer can easily debug and test their own work. I have a few thoughts about how that could be done better, but I might try building a better Reference Enhancer first to see if my ideas work and then I'll comment further.

  58. Byte code modification makes debugging considerably more difficult. It's been my experience that it's not too difficult to find a bug in object persistence products, I don't want them injected into my bytecode.

    With respect to the original post, Sun's J2EE team has never been supportive of JDO and has pushed the idea that CMP should be the only persistence model for objects on the Java platform, which is, imho, a bad thing. I wonder if this is a strategic move to protect their vision of J2EE?
  59. I work for a JDO vendor company called HYWY Software, Inc., developers of the PE:J(tm) product. We are probably the only JDO vendor doing source-code enhancement.

    The source code generated is extremely well documented, with references to the JDO Spec section numbers. This helps our users look at the generated persistence code to see what is done to their sources, and cross-reference the JDO spec when required to see why a certain piece of code is generated a certain way.

    It also helps users certify sources to their clients where that is required, e.g. for large government projects.

  60. The JDO 1.0 Spec does not dictate to vendors that they must use byte-code manipulation; rather, it gives them the choice of that or source code processing. Gopalan is correct that HYWY's PE:J product is one of the very few JDO-supporting products that uses source code processing, which, as we can see from the comments in this thread, is definitely preferable to most developers, for debugging among other reasons.
  61. Hi,
    byte-code enhancement can be very transparent! Orient Technologies' JDO implementation can perform enhancement at run-time.

    User classes are compiled as normal, and on first use, if they aren't already enhanced, the ClassLoader enhances them prior to loading.

    Some AppServers perform enhancement at the deploy step, and few people know that :-)

    Luca Garulli
  62. Matthew

    byte-code enhancement is nice because you can make classes persistent even when you don't have the source code.

    From the debugging point of view: I prefer to debug my own business code, than debugging all the JDO technical code that has been added in my business code.

    Adding JDO persistence at source-code level is like the old C++ pre-processing, this is definitely not in the Java spirit.

    And this kind of pre-processing adds a new step in the development cycle, while byte-code enhancement can be hidden, as Luca Garulli said previously (using class loaders, for instance).

    Cheers, marc
  63. Gopalan

    What happens if some of your customers modify your generated JDO source-code?
    Will you continue to support them?

    cheers, marc
  64. Hello Marc

    As long as the modification is in conformance with the JDO Spec, PE:J's JDO runtime will support it.

    graj at hywy dot com
  65. The "Hywy PE:J" technology is "owned by Vez Inc."

    We are continuing to use and develop it.


    It is based on the "HYWY 101©" Knowledge Map, which we also "Own"


    All of the "I.P." is "Legally" protected from 1981 forward. The "Hywy PE:J" I.P. is Also "Protected",

    As are all the new uses of the "HYWY 101©" Technology @


    Please follow us @, or @vez Inc. on Twitter or Myspace. for updates SOON.



  66. Hello Gopalan & everyone:

    We are "Vez Inc." and we were "" and "Own" the "HYWY 101©" Knowledge Map, and have "ALL" the "I.P." from 1981 - 1998 forward. The "Late Fred Wiebe" had his I.P. "Protected for "Generation" according to "Copy Write Law" worldwide. We used it together to build the "Hywy PE:J" Productivity for JAVA Tool Suite everyone was using by 2003, before we were "Stopped" by a "theft".

    We continued to "Use" the I.P. and recently launched the new "Digital" versions of the map and other tools and devices at


    All of the "Hywy" I.P. is "PAPERED" and we WILL be "Protecting" OUR Property from "Improper" use.

    Any "Person" may "USE" any and all of the "HYWY" products, forever, by simply "paying" a one time $5.00 fee. Employees of "For Profit" Entities MUST pay the fee for you. If they do not, just pay it yourself, and you will have a job if they "Fail" to stay in "Business" for "Any" reason, forever.


    You do not have to tell us your employer will not do the "right" thing. We "OWN" all the "Source" CODE and can "SEE" all "Uses".


    Protect yourselves, fellow visionaries. It was and is the work you do that is the correct way to integrate technology, until NOW, and "Your" Talents will be "Essential" to integrating "Everything" on the "Digital Version" of the "Living" HYWY-101. All business will need your "Skills" to change from mathematically based computing logic to "Perfect Human Logic" based "Computing" and device use.

    HYWY PE:J JAVA was/is the shortcut, then and now. THIS TIME WE WILL NOT "Disappear off the Face of the Earth". We PROMISE. Go to, print the MAP, ask a "Question", get the answer, drop the map on the floor, look again, get up off it, Higher, higher, see this "Message" idea, from many steps, dimensions above.

    You will KNOW WHAT TO DO instantly. FIND US - CONTACT us THEN.

    H & M



  67. Greg

    >>Byte code modification makes debugging considerably more difficult.

    Can you explain why you think it makes debugging more difficult? As far as I know, line numbers and other debug info remain the same, as JDO byte-code enhancement must respect JPDA.

    cheers, marc
  68. I do not think byte code manipulation is that big an issue. The big issues are the ones raised by Ward Mullins, which got lost in some heated discussion.

    I wonder whether Sun, by pulling the JDO implementation out of Forte and directing its customers to use Cocobase, is indeed acknowledging the correctness of the issues raised by him.
  69. cocobase intrusiveness

    Being non-intrusive is a plus for any technology solution, but what I find curious is the notion that being non-intrusive translates into "making any Java object persistable without the object or developer knowing it is being persisted."

    Why shouldn't a spec insist that an object declare its persistability? Surely that's a key characteristic of some objects, not shared by others.

    And why should the persistence be "transparent"? Shouldn't a framework that depersists and persists access the persistent attributes differently from the way a client of the object uses them (so an object can independently implement change tracking, e.g.)?

    I would say that the least intrusive framework is the one which makes persistence simple to accommodate, and transparent in the sense of it being obvious to the developer what is happening. Thus having to implement a marker interface, and providing persistence-special getters and setters, is perfectly acceptable to me. I don't even mind if the interface goes beyond being a marker and provides means to interrogate the object/class about its persistence preferences, inform it about transactional events, etc. To me that's not less transparent, but more.

    Of course, one can always argue that this makes it impossible to use persistence without access to source code, but to me that's a red herring. If there were a persistence spec, and vendors followed it, it would cease to be a problem.

    Others will argue that the extra methods clutter the object and make busy-work for developers. Fair enough, but that's why we've got advanced IDEs and other tools - to generate and manage that kind of code, and make it unobtrusive (like Visual Studio's ability to intelligently collapse selected code, e.g.).

    Finally, I think aspect based programming has the potential to bring enormous clarity to this question. Persistence is a ready-made aspect of an object, and implementing it as such, solves many, if not most, of the problems mentioned in this thread.
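The marker-interface-plus-change-tracking idea described above could look like the following sketch (interface and method names are invented for illustration, not taken from any spec):

```java
// Hypothetical declared-persistability contract: the class states that
// it is persistable and exposes change tracking to the framework.
interface Persistable {
    boolean isDirty();
    void markClean();
}

class Customer implements Persistable {
    private String name;
    private boolean dirty = false;

    Customer(String name) { this.name = name; }

    // persistence-aware setter: records the change for the framework
    void setName(String name) {
        this.name = name;
        this.dirty = true;
    }

    String getName() { return name; }

    public boolean isDirty() { return dirty; }
    public void markClean() { dirty = false; }
}
```

The framework can then flush only objects whose isDirty() returns true, without any byte-code or source-code modification; the cost is the explicit bookkeeping in each setter.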
  70. cocobase intrusiveness

    I had similar experiences with the CocoBase guys.
  71. Price Of Cocobase

    Hi Toby,

    Regarding the claim that you need to use either (1) public 'bean-like' accessors or (2) byte code mangling - we do neither.

    We've taken over a project that has its own home-grown persistence layer, including custom O/R mapping. It uses reflection to directly access private fields of the data objects. When we took over the project and I inherited this framework, I thought the performance of the reflection would be crippling. But actually, using reflection to directly set fields is not a big performance issue, and can be implemented quite cleanly.

    (But I'm _not_ suggesting that a project writing its own home-grown custom o/r mapping tool is a good idea.)
  72. Price Of Cocobase[ Go to top ]


    Can you explain to us how you can access/set private attributes using Java reflection?

    cheers, marc
  73. Price Of Cocobase[ Go to top ]

    I don't want to speak for Toby, but I thought he was referring to accessing private properties using JavaBeans introspection, which is absolutely possible. Look at the Jakarta Commons BeanUtils or PropertyUtils classes.

  74. Chuck Cavaness wrote:

    I don't want to speak for Toby, but I thought he was referring to accessing private properties using JavaBeans introspection,

    Actually, I'm not the one who brought this up, though I'm quite familiar with it. For those who aren't familiar with this "loophole" in the Java security model, take a look at java.lang.reflect.AccessibleObject. It's the base class for the reflection classes, Field, Method, and Constructor. Basically, it allows you to read / write any private members of any instance of any class.

    Sean Broadley and Tim Fox suggested that I wasn't familiar with this method, because I failed to mention it. The truth is that I didn't mention it, because the use of AccessibleObject seems dangerous to me. It's a very strong, undirected violation of encapsulation. I was lax in not mentioning this in my post.

    On top of that, the use of AccessibleObject means mucking around with your SecurityManager's policy, which can be tough or impossible depending upon the environment you are in.

    If the general consensus seems to be that these aren't problems in reality, then I would probably be persuaded to change my mind.

    Also, I believe Tim said:

    1. Like you, I don't like the idea of byte code modification a la Sun JDO.

    I'm personally neither for, nor against, byte code modification. My point, though, was that byte code modification is an "intrusive" technology, in that your "business object" is modified to meet the persistence layer requirements. This also holds for any type of automatic source code modification. (Both of which are used in JDO to meet the requirement that the "business object" must implement the PersistenceCapable interface).

    God bless,
    -Toby Reyelts
  75. Toby Reyelts wrote (about setting private members via reflection, which I seemed to endorse):
    "I didn't mention it, because the use of AccessibleObject seems dangerous to me. It's a very strong, undirected violation of encapsulation...
    ...If the general consensus seems to be that these aren't problems in reality, then I would probably be persuaded to change my mind."

    I appeared to support this, but actually I'm in two minds about it, so I'd like to make my views clearer.

    I think there's a risk with transparency. Transparent persistence (which we provide, and which is JDO's aim), transparent transaction management (as provided by EJB container-managed transactions), etc. 'Transparent' here means 'done to the code by someone else' and can cover direct access to private fields, byte code manipulation, or the ejb container layering in code that does stuff between the remote interface call and the ejb bean.

    Transparency means developers need to *do* less. Doing less is good, because if I don't do something then I can't do it wrong (less coding = less bugs). But the developer must (must, must, must) *know* what the 'transparent' layer is going to do, otherwise they won't understand how objects of their class will behave. The risk is that developers need to know something about a class that is _not_expressed_ in the class's Java code.

    Direct access to private fields is a particularly dangerous one, because it means that the developers need to know that the normal Java rules _do_not_apply_ here.

    Doing less is good, but needing to know more to understand the program's behaviour is bad.

    Of course, you buy back a lot if what the developers need to know is standardized (e.g. JDO, EJB).

  76. Price Of Cocobase[ Go to top ]

    Chuck Cavaness wrote:

    I wouldn't say that Cocobase is the best, but so far it has been a good product. It could stand to have a better OQL facility. TopLink's is very nice. Of course, it's been around for a long time.


    What would you say was lacking from the Cocobase query facility? What do you feel are the important aspects of a query API in an O/R product?

    Thanks and God bless,
    -Toby Reyelts
  77. Chuck Cavaness wrote:

    However, if you want a good open source ORM, take a look at Object/Relational Bridge at:


    Sorry Chuck,

    One last question. What was it that you found was lacking in Object/Relational Bridge?

    God bless,
    -Toby Reyelts
  78. Based on what I'm doing with it, not much. Let me remind anyone who hasn't been following this thread: I'm using it for an example Storefront application that I'm including in the Struts book I'm currently writing. I'm not building a huge application, so I used a very specific set of criteria for evaluation.

    Having said all of that, it could use a mapping tool that would allow you to map the classes to the database. Performing these mappings by hand is very tedious and error-prone, and a tool that could introspect your classes and the schema and let you map them that way would save some time and frustration.
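The class-introspection half of such a tool is straightforward to sketch with reflection. Here is a hypothetical example (the naming convention and the `Product` class are invented for illustration; a real tool would also inspect the database schema and let the user adjust the matches):

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class MappingSketch {
    // Illustrative: propose a column name for each declared field,
    // e.g. itemName -> ITEM_NAME, by splitting camelCase with underscores.
    public static List<String> proposeMappings(Class<?> cls) {
        List<String> mappings = new ArrayList<>();
        for (Field f : cls.getDeclaredFields()) {
            String column = f.getName()
                    .replaceAll("([a-z])([A-Z])", "$1_$2")
                    .toUpperCase();
            mappings.add(f.getName() + " -> " + column);
        }
        return mappings;
    }

    // Hypothetical persistent class used as input.
    static class Product {
        private String itemName;
        private int unitPrice;
    }

    public static void main(String[] args) {
        System.out.println(proposeMappings(Product.class));
    }
}
```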

    Other than that, it's been nice. Of course, I'm sure there are at least a handful of other solutions in the same stage.

  79. Hello!

    I have a few basic questions. Perhaps somebody here can enlighten me.

    1. With the advent of EJB 2.0, I can have the EJB container handling entity relations. If this is done well, wouldn't it minimize the need for a stand-alone O/R mapper?

    2. If I still would like to use an O/R mapper like for example ObjectBridge, could I make it coexist with CMP (using EJB-QL etc) or would I have to choose BMP for representing entities?

    3. If I'm designing a system from scratch, why shouldn't I go for an OODBMS directly rather than mapping my business objects to an RDBMS using an O/R mapper?

    4. If I were to use an OODBMS, could I still use the CMP 2.0 model? How would, for example, EJB-QL map to OQL?

    Best regards,
    Pelle Poluha
  80. re. 1: EJB 2.0 does relationships, but still leaves many issues untouched. One - which IMO is indispensable for a true object persistence mechanism - is inheritance & polymorphism. There's more; read the feature lists.

    re 2: not really. A vendor might implement CMP on top of the O/R tool, but even then all you would see is CMP.

    re 3: If you have the choice, then take it. Of course, you would still thoroughly evaluate the products in question, as there is (a lot) more to a DBMS than just the double-O.

    re 4: I don't think so. EJB is for relational only. JDO is a different story.

  81. JGenerator[ Go to top ]

    We've written a product called jgenerator. What's different about it is that it can code-generate In-Memory, JDBC or EJB beans all using the same APIs. We also have JDO and JINI versions being written, which will be released in the next few months. You should look at it, as it allows you to migrate between the different technologies as they evolve.

    There seems to be this implicit assumption that you HAVE to tie your persistence interface to an implementation. This simply isn't true.

    Other comments - JDO isn't really vendor neutral, as so much of the spec is optional. While we're on the subject, when you get down to it, neither is EJB, especially when you try to handle clustering and failover.

  82. Pelle Poluha,

    1. With EJB 2.0 supporting relations (binary associations), you can avoid a direct dependence on O/R mappers. But some application servers in turn use O/R mapping products to handle CMP, so the dependence may not really go away; it will just be abstracted. Also, CMP is available only in application servers; in a plain J2SE environment, the only choice for managing your persistence was O/R mapping / persistence vendors until JDO entered the picture recently. And most of these O/R mappers are becoming (or planning to become) JDO vendors anyway.

    2. You have both CMP and BMP options, but this depends on the integration capabilities of the OR Mappers with application servers.

    3. If the OODBMS meet your requirements, there is nothing stopping you.

    4. You can plug any Enterprise Information System into an application server. This includes RDBMS, OODBMS, ERP, mainframe systems, etc. The requirement is that they be JCA compliant (SPI contract) and have resource adapters that can plug into the application servers. So find an OODBMS that has a JCA resource adapter and you should be good to go (sorry that I cannot help you out with a name).

    S Rajesh Babu
  83. Just following on from some of Chuck's points. I'm using ObjectBridge (OJB) at the moment and am very happy with it.

    It's looking like OJB will move over to Jakarta soon to be a fully fledged Apache/Jakarta project, which I'm sure will help it become even better.

    Some of the stuff from the Torque project at Jakarta is very useful when working with OJB: from a simple XML document you can define your relational schema and then autogenerate (using Velocity) schema documentation, SQL DDL, the OJB repository, and Java beans (with no persistence code or public fields in them) that work great with OJB.

    Also, there are Ant tasks to create the database, dump the data to XML files and reload it, and so forth.

    This Torque-generation stuff should be moving to the Jakarta Commons project soon so that it can be improved and used across both the Torque and OJB projects more easily.
  84. I recently evaluated some OODBMSs in the context of our new J2EE project, especially Versant. I also evaluated some O/R mapping tools. In a previous job, we wrote our own O/R mapping tool (please don't make that mistake...), so I'm aware of object persistence issues.

    Unfortunately, OODBMS technology was rejected by our managers, but let me share my impressions.

    As an OO-centric developer, I must say that Versant is everything I ever dreamed of. Your object model IS your database schema, with very few limitations, especially for inheritance, polymorphism and collections. You don't have to implement or extend anything. You focus on a truly object-oriented, vendor-neutral business model, that's all. Just forget about tables, columns, joins, mapping, ... The database will automatically create its internal schema for any persistent class when you insert an instance for the first time.

    An object can have a Vector of objects of ANY type. That means your object can register listeners and fire events to arbitrary persistent objects implementing a given interface, just like Swing components.

    Your object can also have a Hashtable with a key or value of ANY type.
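In plain Java terms, the listener pattern being described looks like the sketch below (no Versant API is shown; the class names are invented for illustration). The claim is that an ODBMS can persist a class like this as-is, heterogeneous `Vector` and all, with no mapping to tables and joins:

```java
import java.util.Vector;

// Listeners can be ANY objects implementing this interface,
// just like Swing event listeners.
interface ChangeListener {
    void changed(String what);
}

class Account {
    // Under an ODBMS, this Vector of arbitrary listener objects would
    // simply be stored along with the rest of the object's state.
    private final Vector<ChangeListener> listeners = new Vector<>();
    private int balance;

    public void addListener(ChangeListener l) { listeners.add(l); }

    public void deposit(int amount) {
        balance += amount;
        for (ChangeListener l : listeners) {
            l.changed("balance=" + balance); // fire to any registered object
        }
    }

    public int getBalance() { return balance; }
}
```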

    It works within app servers or standalone applications as well. In fact, Versant was by far the easiest to integrate into the not-so-standard WebSphere 4.0.

    It offers a SQL-like query mechanism for simple and medium-complexity queries, and a query API for very complex queries.

    Physical storage can be split up into many distributed databases when dealing with large volumes of data.

    It offers a plugin for WebSphere Studio.

    It is very easy to use - the easiest of all the products I evaluated. It looks a lot like JDO. In fact, I think Craig Russell previously worked for Versant.

    Lazy-loading, different lock levels, and lots more.

    The API is proprietary, but they are working on a JDO implementation called JUDO.

    You can download the full product with a 60-day evaluation period. Installation is easy. Please give it a try before saying anything about ODBMSs. Once you've touched it, you're addicted to it...

    Note that I'm not related to Versant in any way.

    Jean-Christian Gagne
  85. <quote>
    Your object model IS your database schema, with very few limitations, especially for inheritance, polymorphism and collections.
    </quote>

    Uh? I thought these were precisely the features that made OODBMSs superior to RDBMSs when accessed from an object-oriented language.


  86. >> As an OO-centric developer, I must say that Versant is
    >> everything I ever dreamed of.

    Unfortunately for others, it is a bit of a nightmare. The data tends to outlast any particular programming language or technology. Relational databases have proved very good at being language independent and have also proved very versatile.

    Let me illustrate why an OODBMS can be a bit of a trap:
    Not too long ago there was a project here developed using C++ and an OODBMS (Versant, in fact). Unfortunately, the C++ developers used multiple inheritance - not their fault; it was quite a common practice in the C++ days. As a consequence, we cannot access this DB from Java, because Java doesn't support multiple inheritance (nor do VB, C#, VB.NET, etc.). What happens when we want to access the same data from the successor to Java 10 years from now?

    Relational databases are technology and language independent. That's why they have been successful. O/R mapping tools will continue to be the bridge between object-oriented design and relational data.

  87. Agreed.
    Talking about how an OODBMS is better than an RDBMS when your system is OO is one thing, but getting the enterprise to use it is another.
    Generally, you can get away with using an OODB in smaller companies, or when you can start from scratch.
    If you are building an enterprise application, chances are that the DB(s) you're interfacing with will be used by other applications (and not just for reporting).
    You could say "so what? those should access the service related to the data, not the raw data in the DB". Yep, that's the theory. But this theory is quite new, and the practice is very different. So whatever you write has to coexist with whatever applications are in place, which quite often means whatever DB is in place.
    Sometimes it's hard to get the enterprise to change from one RDBMS to another, and that's way easier than changing to an OODBMS.
    The only way I can see OODBMSs "taking over" is in fact having one datastore provide both RDBMS and OODBMS interfaces (whether it's embedded O/R mapping or R/O mapping - who cares, as long as it works?). I have yet to see one.
  89. Jean-Christian

    Maybe you don't know LiDO from LIBeLIS. It is a JDO implementation oriented towards performance and scalability of highly transactional Java applications.

    The good thing for you is that LiDO already covers RDBMSs, binary files AND VERSANT!

    In the latest build available there is a demo showing how to take a JVI application and convert it to LiDO Versant.

    With LiDO you keep all the benefits of Versant from the developer's standpoint, but you can run your application on any RDBMS or on Versant (just by changing some parameters in a config file).

    LiDO is a complete rewrite of a JDO Versant Java interface; it does not rely on JVI, and is even faster than JVI. You can use all the JDK collections, and the app server integration (including WebSphere 4.0.2) is done through full JCA compliance.

    I think you should try it at It answers your dream, while keeping your managers comfortable with their RDBMS choice.

    Best Regards, Eric.
  89. LiDO[ Go to top ]


    I have read through the documentation for LiDO and it seemed very promising. However, I got the impression that the current version does not support mapping to an existing DB; it only supports auto-generation of the DB schema from your domain model.

    If this is the case, the product is not useful for 90% of the software development going on in larger enterprises, which always have existing databases that need to be integrated.

    The documentation mentioned that they would have this support in a later version, and when that happens I will be among the first to download a trial.

    If I am wrong, please let me know.
  90. LiDO[ Go to top ]


    If you are looking for a tool that maps an object model to your existing DB, then FrontierSuite for JDO from ObjectFrontier is your best bet. It goes one step further and allows you to reverse-engineer an object model from your existing DB, which can then be used for building JDO applications that use your existing DB as the datastore.

    You can check it out at

    S Rajesh Babu
    ObjectFrontier Inc
  91. LiDO[ Go to top ]

    I know this is not the focus of the message I'm replying to, but just wanted to alert folks that Sun's Forte for 3.0 Community Edition (which is free) can generate JDO compliant Java classes from an existing relational database.
  92. LiDO[ Go to top ]

    Sun's Forte for 3.0 Community Edition (which is free)

    Oops, the product name I typed was intended to be....
    Sun's Forte for Java 3.0 Community Edition

    However, the name of this is now "Sun ONE Studio 3.0, Community Edition" (still free though).
    download at:

  93. You should also add JGrinder to your list:

    Highlights include:

    - Optimistic and Pessimistic Concurrency Control
    - Nested Transaction Support
    - Transactional, Non-persistent Objects
    - Support for RDBMS, ODBMS, and File-based Persistence
    - Caching

  94. Check out Hibernate! It is in your (and my) price range.
  95. ObjectSpark is another alternative.

    check out
  96. This is good news for iPlanet/AppServer. Eventually the CMP may show up as some JDO/EJB hybrid or combination. Architecturally, this may end up much tighter and cleaner than trying to add JDO on top of or underneath an EJB implementation. I feel pretty optimistic, since JDO seems to have a nice interface, and I'm glad Sun woke up and is putting some real resources into AppServer.

  97. I guess Sun just didn't have an industrial-strength JDO 1.0 implementation ready for Forte, and they could hardly keep supporting a 0.5 implementation now.

    I'm using JDO (Kodo, from at the moment, and like it a lot, but I haven't used other ORMs much.

    One non-technical advantage of JDO is that it is a standard, and so you have a good degree of vendor independence, and may have some open source alternatives available one day.

  98. Hey, Sun got overstretched. It's their own fault for coming up with too many APIs to do the same thing. I count 6 persistence APIs. They should have come up with ONE persistence API and stuck to it. This certainly means we're going to see other projects abandoned as Sun's share price continues in the doldrums and their management gets a shake-up.

    When I say one API, I should say what the alternative is... they should have extended the Beans specification to include Session and Home interfaces with methods save, remove, reload and findBy. Something straightforward we could all trust in. That's what we did with jgenerator ( now we have APIs that can handle In-Memory, JDBC, EJB, Jini (and JDO soon).
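The proposal above can be sketched as a pair of tiny, implementation-independent interfaces (this is my reading of the idea, not jgenerator's actual API; all names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a minimal persistence API: the interface says
// nothing about how objects are stored, so the same client code can run
// against in-memory, JDBC, EJB, or JDO backends.
interface SimpleSession<T> {
    void save(T bean);
    void remove(T bean);
    List<T> findAll();
}

// Trivial in-memory implementation, just to show that the contract
// carries no storage details at all.
class InMemorySession<T> implements SimpleSession<T> {
    private final List<T> store = new ArrayList<>();

    public void save(T bean)   { if (!store.contains(bean)) store.add(bean); }
    public void remove(T bean) { store.remove(bean); }
    public List<T> findAll()   { return new ArrayList<>(store); }
}
```

Swapping the backend then means swapping the `SimpleSession` implementation, not the client code.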

    JDO and EJB describe implementation in way too much detail. The whole point of beans and interfaces is that the implementation is independent. There were some second-rate decisions made by senior management when they decided to standardise the server internals along with the interfaces. Can you imagine the W3C HTML committee coming up with detailed server-internal standards? (Well - maybe Microsoft would produce half-decent software if they did.)

  99. Oracle's J2EE framework called Business Components for Java (BC4J) implements what - given the comments in this thread - appears to be quite a unique combination: it allows the developer to fully exploit SQL without losing the ability to encapsulate reusable business logic that is applied to the business components when "slices" or "views" (defined by SQL queries) are updated. The idea is to define your business components (which implement your DAO for you and can optionally enforce business logic) and then use SQL to query any slice/view/join of info your app needs, with help to automatically keep these two worlds in sync.

    Curiously, a previous C++ incarnation of the BC4J framework (which we threw away in 1997 to rewrite/rearchitect in Java) actually used OQL as its query language, but our application development teams - over 20 of which now use BC4J in their J2EE web app implementations - complained that they couldn't get the best performance out of their apps without full control over the SQL.

    A technical whitepaper that describes how this all is implemented (for anyone curious) is at:

    And a paper illustrating how to deploy BC4J-based J2EE apps to 3rd-party J2EE Containers like JBoss is at:
  100. Hi Robin,

    There are a lot of misperceptions about what should and should not be in an O/R mapping tool and spec, and I feel I must agree with your statements as being the most clear and accurate.

    You should check out the CocoBase Transparent Persistence implementation and generic facade. It actually does what you (and I) wish Sun would do in the spec - today... Thank you for your perspective; it's always nice to see folks requesting simplicity and non-invasive designs. Many engineers seem to think a 500-page spec that can't be read without No-Doze is a good thing; I generally have to disagree with that methodology, however :)

    CocoBase uses a very high-level generic API that's shared source and that hides all of the uglies of doing persistence. Developers can develop a single app that will deploy standalone or against a J2EE session bean just by changing connection parameters.

    It also does this without requiring bytecode manipulation or other changes to the object model, and it automatically integrates with the JTS of the application server.

    Also, some developers have written that the overhead of O/R mapping is too high, and if you look at most O/R products I personally feel this is true. CocoBase, however, is consistently within a few percentage points of hand-coded JDBC. Customers often tell us that our O/R layer actually outperforms the code they replaced with it, so the folks who are of the perception that O/R mapping is slow either haven't tried CocoBase or didn't do a thorough benchmarking with our assistance... Please come and (re)download CocoBase and try it out for yourself. And we aren't even talking about the performance gains that are often possible through caching with CocoBase; we're talking about raw overhead...

    Also, just to clarify: CocoBase out of the box is 100% non-invasive and non-proprietary. Someone posted that the last time they looked at CocoBase it wasn't - but that was years ago, when there was an issue with requiring CocoBase APIs, and even then it was optional.

    There's a new version of CocoBase (Service Release 2) that has a great new version of the simplified API. I would recommend customers come and request it in the next couple of days. It's quite amazing, and so much easier to program with than any other system I've seen.

    Ward Mullins
    THOUGHT Inc.
  101. Hi,

    There is another open source lightweight persistence layer called Abra that we developed while working for a startup (now dead). It works with several databases - Oracle, MySQL, Postgres, and DB2 (the home page is a bit out of date).

  102. WebObjects - O/R Mapping[ Go to top ]

    A good alternative to all this is the EO Modeler in WebObjects. :)

    And the price is quite good at $699.00. :)
  103. WebObjects - O/R Mapping[ Go to top ]

    I agree with you: WebObjects is a good tool and EOModeler an important element. The development times achievable with WO are remarkable compared with many other ways of developing Java apps; I think this has the stamp of Apple principles, and I am glad I found it. People could do a great deal worse than looking at EOModeler and the tutorial that goes with it.
  104. JPOX (JDO 2) experience[ Go to top ]

    We've been using JPOX (an open source, mature and stable JDO 2 implementation) for a while now and have been very impressed. The ability to do sophisticated object modeling with transparent persistence is saving us literally man-months of development time. The JDO 2 spec is truly mature and well thought out, and it makes EJBs seem like a step backwards. I can't work out why anyone would "choose" to use EJBs - sure, many will be "pushed" into them from on high due to strange political factors, but I can't see why anyone would pick them given the option.

    We were so impressed with JPOX that we decided to add automatic generation of JDO 2 mapping files to the live class diagrams of our Javelin Java Object Modeler/Coding tool. Previously we only supported the generation of Hibernate mapping files, but we found that using JDO via JPOX (and other implementations) is definitely a very cool and efficient O/R mapping technology.
  105. Just to let the Forte customers know: Sun asked us to make everyone aware when we released our new Service Release and documentation for using CocoBase with Forte. Those items are already available. The new Service Release 2 of CocoBase 4.0 now includes updated Forte integrations, and can be requested and downloaded from our website.

    Now for some of the misconceptions I've seen posted here about O/R mapping in general, and CocoBase in particular: I can't speak to the performance issues of other O/R mapping tools, but I can comment on CocoBase and its performance.

    Good O/R mapping is not significantly slower than JDBC, and with a good implementation such as CocoBase it is often faster! CocoBase easily runs within the margin of error (+/- 1-3%) on most computer systems versus hand-coded JDBC, and in certain circumstances, including with EJB entities, it runs significantly faster than the typical hand-coded JDBC BMP bean. There are a lot of technical and design reasons for this, but if you want to try an O/R mapping tool that is really optimized for low overhead even without caching, give CocoBase a try. You might be pleasantly surprised at the results. Developers also get a much easier programming model, better applications with easier maintenance, and the ability to tune applications and SQL without a rebuild or recompile!

    Now as for pricing: CocoBase is priced at $6000 per developer, but that includes extensive support & free product upgrades. Considering that includes unlimited email support, which many support organizations charge for per incident, this is a huge value!

    The price is also for a single developer, and doesn't account for volume discounts, or 'indirect usage' pricing, which we generally give for developers who aren't directly using the tool but who are still calling it indirectly in library/service form. It's important to note that what CocoBase delivers in product, documentation, integration and support is a VERY high-value proposition. It isn't just the cost that must be assessed, but also the value. One recently deployed customer estimated that we saved their project over 3 million dollars in development and deployment costs; that's where $6000 a developer seems inexpensive...

    Also, for deployment, CocoBase doesn't have any server or runtime fees like many other O/R mapping tools, so it tends to be much more affordable for real-world development and deployment. And for this price developers get extensive integrations with other tools and technologies, and a mature product that has been shipping for 6 years! CocoBase was the first Java O/R tool, was written 100% in and for Java, and is the veteran product in Java O/R.

    We appreciate everyone's interest in this topic, and hope that developers will give CocoBase a spin and see why our product is so different from other O/R solutions... It's nice to see so much interest in O/R mapping, and it's great to see people so passionate about the topic.

    Have fun - and happy persisting! :)

    Ward Mullins
    THOUGHT Inc.
  106. I like the O/R tools and the choices we have. This is one great thing about Java. With .NET you get to do it one way whether you like it or not. OK, maybe 2 or 3.

    Anyway, for the most part, performance issues are usually architectural/deployment/platform problems and less about the development tool(s).