Discussions

News: Cocobase Releases New O/R mapper with JMS-based Caching

  1. Thought Inc. today announced a new version of CocoBase Enterprise O/R. In the new version, CocoBase has expanded its available caching options with an implementation of a JMS-based distributed cache that can be used with clustering. It is based on the current JMS specification and integrates with third-party JMS messaging software.

    Check out Cocobase.

    Threaded Messages (57)

  2. $6000 per developer is very steep compared to the $2000-$2500 per developer that is typical for other O/R mapping products.
  3. IMHO if $6k/developer gets the job done, it's a bargain. At least in the US for your typical IT shop, where good developers cost $100k/year each. The real question is: How do they stay in business just selling development licenses (since runtime is free)? Particularly when it seems that most developers would rather write their own custom O/R implementation for each and every project ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
  4. Toplink

    Does anyone know what kind of licensing and how much TopLink costs, now that Oracle has acquired it? Couldn't find much info on Oracle's website.

    /Peter
  5. Toplink

    Peter

    The TopLink Mapping Workbench is now free of charge (it used to be $3000 from WebGain) and will eventually be bundled into JDeveloper. At the moment you still need to download both products from otn.oracle.com.

    The combination of JDeveloper and the Mapping Workbench is a compelling message.

    I'll check the list price of the TopLink runtime, but it's on the Oracle price list, so your Oracle SSR can tell you directly, or mail me at phil dot mclaughlin at oracle dot com and I'll find someone to help you.

    regards
    Phil..
  6. Toplink

    Peter,

    >Does anyone know what kind of licensing and how much
    >TopLink costs, now that Oracle has acquired it? Couldn't
    >find much info on Oracle's website.

     I was briefed on TopLink by their product manager a couple of days ago. Licensing will still be on a per-CPU basis at the same price they had with WebGain (I think it was $6K/license). However, there is no longer a separate development license.

     If you buy a license of Oracle9iAS, though, you will get a deployment license of TopLink included for free.
  7. Toplink

    Saw something today you may want to take a look at instead of Toplink. Check out www.sdacorp.com.

  8. "The real question is: How do they stay in business just selling development licenses (since runtime is free)? Particularly when it seems that most developers would rather write their own custom O/R implementation for each and every project ;-)" - Cameron

    Unfortunately, in our industry most projects never see the light of day. ;-) If companies didn't charge for developer seats, they would be out of pocket big time on 80% of the projects they get involved with.

    The only good way to ensure you get paid is to sell consulting or development licenses. Waiting for a project to go into production before selling a license... you might as well go to Vegas with your money - better odds of getting paid. :)

    Greg
  9. Assuming you purchase a server with a good CMP 2.0 implementation, what could be a good reason to fork out for an O/R mapping tool?


    1) The mapping GUI?
     That is gradually being addressed by better IDEs
     (JDeveloper/Together/etc.)

    2) More fine-grained control?
     I have only tried the WebLogic 7.0 implementation, but it seems to offer a decent amount of control.


    3) Distributed Caching?
    Once JSR 107 is part of J2EE we should have an integrated standard entity bean cache.


    CMP 2.0 is a standard, there are a number of decent implementations, your code is portable, and there is multi-vendor tool support.

    The main difficulty we have had with 2.0 entity beans is the fact that even a local interface call has to go through a fairly heavy interception layer, and the performance for using simple getters/setters for a large set of beans was poor.


    Given good tool support, integrated standards-based distributed caching, and improved implementations, we are content.

    EJB 3.0 perhaps ?

    Death to the proprietary - long live the standard!

    Cheers

    Zohar Melamed
  10. Hi Zohar,

    Zohar: "Assuming you purchase a server with good CMP 2.0 implementation, what could be a good reason to fork out for an O/R mapping tool?"

    First, I should state that I don't work for Cocobase or specifically endorse their products. I have spent some time researching Cocobase and other solutions, primarily as potential complementary products to our J2EE clustered caching software, but I've never built an application using Cocobase.

    Your assumption aside, you typically purchase a tool because it makes your life easier. You can probably determine a function that would describe the decision by defining cost of implementation, risk of rolling your own vs. buying a tool, cost of maintaining your own code vs. cost of paying for product maintenance, etc.

    Regarding Cocobase, it is more than "a tool". It has a substantial runtime that is deployed to the application server to actually do the work of O/R mapping.

    Back to your assumption, if you get everything that you need with the application server, and the previously described function doesn't show that you are going to save money or hit your schedule faster (or whatever is "valuable" to your company), then it would not make sense to buy the software. (Similar choices were made five years ago with application servers. I note that you are not suggesting that you roll your own there.)

    Regarding the new Cocobase caching itself, Cocobase has always had a published API that caching products could plug into. I don't personally know if this JMS implementation is based on that API or not. However, since the source is available, or will be, that should be easy to tell. The only other information I have on the new caching is from a J2EE developer who sent me the following: "I just downloaded the latest release (it was supposed to have caching sources included, but it didn't), and it looks like a very simplistic addition - it posts serialized objects onto the topic for update, insert and delete events. Cache itself uses soft references." FWIW.
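    For readers unfamiliar with the pattern that developer is describing, here is a minimal, hypothetical sketch of a soft-reference cache that announces insert/update/delete events. All class and method names are my own invention, not Cocobase's, and the JMS topic is replaced by a plain in-memory listener list so the example is self-contained; a real implementation would publish the events via a JMS TopicPublisher.

```java
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch only: a cache that holds values via SoftReference
// (so the GC may reclaim them under memory pressure) and notifies
// subscribers of insert/update/delete events. A real implementation
// would publish these events on a JMS topic instead of a listener list.
public class EventedSoftCache {
    public interface CacheListener {
        void onEvent(String event, String key); // "insert", "update", "delete"
    }

    private final Map<String, SoftReference<Object>> cache = new HashMap<>();
    private final List<CacheListener> listeners = new ArrayList<>();

    public void subscribe(CacheListener l) { listeners.add(l); }

    public void put(String key, Object value) {
        String event = cache.containsKey(key) ? "update" : "insert";
        cache.put(key, new SoftReference<>(value));
        publish(event, key);
    }

    public Object get(String key) {
        SoftReference<Object> ref = cache.get(key);
        return ref == null ? null : ref.get(); // null if the GC cleared it
    }

    public void remove(String key) {
        if (cache.remove(key) != null) publish("delete", key);
    }

    private void publish(String event, String key) {
        // Stand-in for topicPublisher.publish(...) in a real JMS setup.
        for (CacheListener l : listeners) l.onEvent(event, key);
    }
}
```

    Peer caches listening on the same topic would apply (or simply invalidate) the entries named in each event, which is all that is needed for the update/insert/delete propagation described above.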

    Zohar: "Distributed Caching? Once JSR 107 is part of J2EE we should have an integrated standard entity bean cache."

    A couple of important points on JSR 107:

    1. There is no allowance in the JSR for transactional consistency. In fact, the JSR states explicitly that it cannot be used for transactionally consistent data. Therefore using the JCache API for caching EJBs would be a poor choice if data integrity were a goal. Note that no actual recommendations have come out of the JSR, so the final spec may support transactional consistency. (Hopefully it will.)

    2. Unfortunately, the JSR is in semi-permanent limbo according to the primary members of the expert group. If you note the schedule, the JSR should have been completed over a year ago, but it has had no published activity in the last year and a half. This is a bummer for companies that would like to use a standardized caching API, and it's a bummer for companies that would like to add the JCache API to their caching products.

    If you are interested in seeing an early implementation of JSR 107, you can look at the Oracle product that the spec was derived from (OCS4J?), or look at SpiritCache from the JMS vendor SpiritSoft, which is based on an early version of the proposed JCache API.

    If you are interested in transactional consistency, you can look at any of the OODBMS-cum-J2EE-caching vendors such as Gemstone, ODI/Excelon, Versant, etc. Also interesting is TimesTen FrontTier. BTW - if you don't like spending $5k on a tool, then prepare yourself for sticker shock with this particular list.

    If you are looking for the highest-reliability (fault tolerant, failover, failback, no single point of failure, automatic clustered balancing, etc.), easiest-to-use J2EE caching product with a variety of options -- including replicated (both synchronized and optimistic replication), distributed (partitioned, with or without redundancy), and transactional (JTA) -- then check out Tangosol Coherence. As an added bonus, I think that you will find its scalable performance is well beyond the other options on the market, and it's built in pure Java.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
  11. Once JSR 107 is part of J2EE we should have an integrated standard entity bean cache.


    Great! Yet another layer of complexity. I personally do not like the idea that Application Servers are duplicating the features that have been in RDBMSs for years. What I mean is locking, caching, aggregates, concurrency control, query language, etc.

    Recently I started to look at these Java persistence layers. Most of them try to shield application developers from the complexities of the underlying databases. I hate the idea that we have these objects but don't have to know how they are stored or where their data comes from. Yes, it might help at first, but when problems arise or you have to optimize bottlenecks, you are in deep trouble (do we agree on this?). The O/R-tool provider then offers to implement a caching layer. Yes, it might solve some of the problems, but there is a great chance that it comes with other problems.

    Today I'm going to try Hibernate (http://hibernate.sf.net/), 'cause it seems pretty interesting to me. I can't say yet what I think about it (I don't like its query language). One interesting approach is SQL2Java (http://sql2java.sourceforge.net/), but code generation is not what I'm looking for. SqlClass (http://www.quiotix.com/opensource/sqlclass/) also seems like a great tool that helps you keep your queries organized in one place. It might pose some problems when several users are writing queries, as it only has one file for the whole database, I think.

    Microsoft has never been very keen to build object persistence layers. Why's that?

    This message was not very well thought out, but maybe we can start conversation based on these thoughts?
  12. Aapo

    One of the key benefits of O/R mapping is the isolation of the mechanics of data storage and retrieval from the details of implementation in the Java code. The most significant benefits are faster application development, runtime performance enhancement, and ease of future maintenance (very significant).

    Consider this case. Developer X writes a BMP EJB and all the DB access logic is hard-coded into the Java via JDBC (and this is by far the most common approach actually in use).

    This approach is a) slow to code and b) brittle, because a change in the DBMS schema - which is typically not controlled by the Java team - forces the developer to alter the entire code base; or worse, Developer X has left the project/company and someone else has the unhappy task.

    This is difficult, time consuming and expensive.

    Developers come in different degrees of experience. The rare ones understand both good Java and highly efficient SQL and can code well. Some understand SQL but not Java or OO (a lot of JDBC code looks suspiciously like COBOL/VB ;-) and some have Java skills but don't appreciate the significance of running 7-way joins on million-row tables (the JDBC is easy enough to write!)

    Tools like Cocobase, but particularly TopLink, speed up development by allowing declarative description of the mapping (TopLink has a GUI tool).

    The mapping information, which is the most likely to change (perhaps in response to a performance tuning issue), is held in XML and does not impact either the Java or the DB schema.
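    As an illustration only (this is not TopLink's or Cocobase's actual file format; the element and attribute names are invented), such a declarative mapping might look something like:

```xml
<!-- Hypothetical mapping file: table and column names can change here
     without touching the Java class or recompiling. -->
<class-mapping java-class="com.example.Customer" table="CUST">
  <field name="id"      column="CUST_ID"  primary-key="true"/>
  <field name="name"    column="CUST_NAME"/>
  <field name="balance" column="ACCT_BAL"/>
</class-mapping>
```

    The point is that a DBA renaming ACCT_BAL only touches this file, not the Java code.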

    The DBA is free to optimise the DB, and the Java developer can concentrate on Java. It is naive to assume that the Java developer can be ignorant of the data store, because there are issues to be considered, but this approach minimises the brittleness of the application code.

    As for replication of functionality, you have a good point. Whether or not to use a CMP engine is a significant decision. However, using TopLink means that you have architectural choices.

    regards

    Phil.
  13. I fully agree Aapo. OR tools are a waste of time and money. What is so difficult about using JDBC, SQL and transactions?
  14. "I fully agree Aapo. OR tools are a waste of time and money. What is so difficult about using JDBC, SQL and transactions?" - Race Condition

    All depends on the complexity of your data model, the ability of your developers, and deployment date.

    I always find I end up writing my own mapping layer when I do it the hard way. Except my mapping layer doesn't come with a good UI, tested libraries, documentation, and product support.

    If you're able to tackle the problem without using an O/R mapper, all the best to you.

    Right now I am contemplating how to map over 4000 database attributes to business objects. :) Not sure if I want to manage all the SQL for that kind of domain myself.

    Greg
  15. Why pay for anything when you can find a comparable open-source project out there, such as Apache's OJB? OJB has all of the features I want.
    OJB is a nice small product and they recently joined Apache; it seems to be an active project. Unfortunately they rely on the dead ODMG standard. They announce some kind of JDO support, but I'm afraid it won't happen for a long time.
    The same goes for Hibernate and Castor; both are nice products (there has been no activity on Castor for a long time now).

    There is a new OSS O/R mapping tool, XORM.
    The good news: it is JDO compliant.
    The bad news: it is still very limited in terms of mapping features, and it is not really JDO compliant (they use proxies).

    OSS products are a good bargain if you don't have hard QoS constraints. They generally focus on features rather than on robustness, stability, and scalability, but you can still develop all these features yourself if you want (just compare the time spent with the cost of a supported commercial product).

    just my 2cts

    cheers, mark
  17. <Quote>
    OJB is a nice small product and they recently joined Apache; it seems to be an active project. Unfortunately they rely on the dead ODMG standard. They announce some kind of JDO support, but I'm afraid it won't happen for a long time
    </Quote>
    Marc, you seem to be indulging in some good old-fashioned FUD here.
    What's wrong with the 'dead' ODMG standard? If it's dead, then why are people developing based on the standard? How is JDO superior to ODMG anyway?

    OJB does support JDO right now (limited), and they are planning to be JDO compliant in version 2.0, which is some time away.
    Why is it so important for an O-R tool to be buzzword compliant?

    Regards
    Ravi
  18. Ravi>>>What's wrong with the 'dead' ODMG standard?

    Eric>>>Nothing's wrong but the fact that it's dead!


    If it's dead then why are people developing based on the standard ? How is JDO superior to ODMG anyway?

    Eric>>>Ask them. Maybe just because they try to reuse old code now available for free.


    OJB does support JDO right now(limited) and they are planning to be JDO compliant in version 2.0 which is sometime away.
    Why is it so important for an O-R tool to be buzzword compliant?

    Eric>>>Ravi, I don't understand your logic.
    If JDO is a buzzword, why is OJB announcing future JDO compliance? Why do dozens of vendors already have an offering? If you think it's a buzzword, then visit www.jdocentral.com and see how active this community is.

    cheers, mark
  19. <quote>
    Nothing's wrong but the fact it's dead !
    </quote>
    Marc, repeating the fact that it's dead doesn't make it dead.

    <quote>
    If JDO is a buzzword, why is OJB announcing future JDO compliance? Why do dozens of vendors already have an offering?
    </quote>
    Vendors will have it on offer because they need their products to be buzzword compliant. They couldn't care less about the technical merits of ODMG vs. JDO, or anything technical for that matter.

    <quote>
    JDO is the standard for persistence that was awaited by object programmers for more than ten years.
    </quote>
    That's funny. I have been using persistence mechanisms for years, and JDO is hardly a silver bullet. It has its merits, but it's nothing revolutionary. It's just yet another way of doing things, with the standard tag attached to it.
  20. Ravi: Marc repeating the fact that it's dead dosen't make it dead.
    Mark>>>Do you see signs of any activity within ODMG?
    ODMG is dead; that's just a fact! Just ask the ODMG members, if you can find them.
    ODMG was mostly a standard for ODBMSs, even if in the end ODMG stands for "Object Data Management Group" instead of "Object Database Management Group" (the initial meaning).
    ODMG has never been implemented by ODBMS vendors.

    Ravi: Vendors will have it on offer because they need their products to be buzzword compliant.
    Mark>>>Cool, persistence vendors get funds from VCs just to be buzzword compliant??? Ravi, we are no longer in 1998-2000!

    Ravi: It has its merits but it's nothing revolutionary.
    Mark>>>Agreed !!!!!


    Ravi: It's just yet another way of doing things with the standard tag attached to it.
    Mark>>>That's THE point. JDO is "just" a standard.

    cheers, mark
  21. Why is it so important for an O-R tool to be buzzword compliant?


    Eric>>>Buzzword compliance is not necessary, standard compliance is.
    JDO is the standard for persistence that was awaited by object programmers for more than ten years. It is not just an O/R mapping tool standard; it is a general persistence standard for Java. It has been designed by a lot of experts from RDBMS, ODBMS, and mapping tool companies. It also takes the best from older approaches like ODMG, but focused on Java. Some vendors (mostly SolarMetric and LiBeLis) propose robust implementations with real references, very active forums, and so on. And a lot of new players are entering the market (JRelay, SignSoft, Hemisphere...).

    cheers, mark
  22. Hi Mark,

    I think you have missed FrontierSuite for JDO in your list of JDO implementations. We provide a very robust implementation of the JDO spec and, in fact, a complete development environment for building JDO applications.

    And we also agree with the need for compliance with the JDO spec, which is a long-awaited generic persistence framework. While we have had an O/R mapping persistence solution for some years now, it has been proprietary; then, when J2EE/EJB came along, we provided a complete CMP engine for entity beans. And with the release of the JDO spec, we have wholeheartedly pushed our support for it, as the days of proprietary solutions may not last very long. And JDO is not just a buzzword. It represents an evolutionary stage in the growth of the software industry. What I was trying to say was that we fully support open standards. Period.

    regards
    S Rajesh Babu
    ObjectFrontier Inc
    www.ObjectFrontier.com
    Rajesh: I think you have missed FrontierSuite for JDO in your list of JDO implementations.

    Mark>>>This is just because I saw your post before writing mine ;-)
    ObjectFrontier seems to be a new player in the JDO arena with a lot of good ideas, but your forum activity seems to be much lower than on the public SolarMetric and LiBeLis sites.

    cheers, mark

  24. Hi Marc (got it right this time),

    <quote>Marc>>
    ObjectFrontier seems to be a new player in the JDO arena with a lot of good ideas, but your forum activity seems to be much lower than on the public SolarMetric and LiBeLis sites.
    </quote>

    Agreed that we are relatively new in the JDO arena, but if you look at the features, you will find that ours is a very comprehensive solution. And with our upcoming release of the next version (within the next two weeks), our implementation will be really cool. And we have a lot in store, especially on the caching side, in the next quarter.

    As for the forums, I agree that ours is not as active as the ones you mentioned, but our interaction with our customers is not only through forums; we handle a large volume through email, voice calls, on-site presence, etc. So what you see on the forums is just a portion of these interactions.

    I am really heartened by your vocal support for this new spec. Agree with you totally on the 'buzzword' topic. It is only natural to see a lot of inertia and a 'what I have is best and comfortable' attitude, and it will take some time for total acceptance. The thing most people do is compare the strengths of the existing technologies with the weaknesses of the new ones - which is going to be biased.

    If JDO is a 'buzzword which vendors just want to use for marketing to suit their big bad commercial goals...', then it will die its own death, for sure. And there were similar cries when J2EE came around.

    If we can separate the hype from the substance, JDO is a very solid standard conceptually; it still has some distance to go in terms of the scope of the spec and the capabilities of the implementations. To see the level of interest in it, people can just take a peek at www.JDOCentral.com.

    regards
    S Rajesh Babu
    ObjectFrontier Inc
    www.ObjectFrontier.com
    Marc > It also takes the best from older approaches like
    Marc > ODMG, but focused on Java. Some vendors (mostly
    Marc > SolarMetric and LiBeLis) propose robust
    Marc > implementations with real references, very active
    Marc > forums and so on. And a lot of new players are
    Marc > entering the market (JRelay, SignSoft,
    Marc > Hemisphere...).

    I can only speak for SignSoft here, but does this list

    http://www.signsoft.com/en/intellibo/references.jsp

    look like that of a newcomer or a non-robust product? By the way, I cannot find any customer list on solarmetric.com. I also think that

    http://www.libelis.com/inner-index.jsp?next=company.html#investors

    (read under "Investors") doesn't look really good. You should change this, Eric, it looks too pessimistic.
  26. other O/R open source products?

    Hi,
    can you recommend a good open-source O/R product?
    How about Cayenne?
    Has anyone used Jakarta OJB in a big project? Or any other free open-source O/R persistence layers?

    thanks,

    Marian Simpetru
  27. other O/R open source products?

    =============================================
    Repost from comp.lang.java.databases
    =============================================

    From: Anony (anon_ymous at notahost dot org)
    Subject: Java, O-R mapping, open source projects
    Newsgroups: comp.databases, comp.lang.java.databases
    Date: 2002-07-28 10:11:03 PST
     

    Open source Object-relational mapping tools for Java:


    ObjectBridge
    http://jakarta.apache.org/ojb/

    http://sql2java.sourceforge.net/

    http://jrf.sourceforge.net/

    http://jgrinder.sourceforge.net/

    http://hibernate.sourceforge.net/

    Castor JDO
    http://www.castor.org/

    JORM
    http://www.objectweb.org/jorm/index.html

    JBoss JAWS project (JBossCMP)
    http://www.jboss.org/developers/projects/jboss/jaws.jsp

    http://objectstyle.org/cayenne/

    Jakarta Torque
    http://jakarta.apache.org/turbine/torque/

    http://www.simpleorm.org/

    http://databind.sourceforge.net/

    See also:

    JDO
    http://www.jdocentral.com/

    http://www.objectmatter.com/vbsf/docs/maptool/ormapping.html

    http://www.ambysoft.com/mappingObjects.html

    http://www.ambysoft.com/persistenceLayer.html

    http://www.theserverside.com/home/thread.jsp?thread_id=14314

    Java DAO (Data Access Objects)
    http://sourceforge.net/projects/basejdao
  28. other O/R open source products?

    How about TJDO? http://tjdo.sourceforge.net/

    Marc
    I've been using OJB to test it out, and so far it has worked very well for me (testing with inheritance hierarchies and m:n relationships). OJB has a lot of momentum, and being from Apache it has very high quality. The documentation is not too bad. As for JDO compliance, I'd love that, but I find that temporarily using ODMG isn't a big deal. The two are pretty similar.

    Michael
  30. "OJB is a nice small product and they recently joined Apache, it seems to be an active project."

    Small compared to what? Our distribution is about 7MB.
    The framework consists of about 600 classes and a regression testsuite with about 200 classes.

    In fact we are an active project with 1 - 2 public releases per month!

    "Unfortunately they rely on the dead ODMG standard."
    Not true. OJB provides multiple user APIs: a kernel API (called the PersistenceBroker API), an ODMG 3.0 compliant API, and a JDO 1.0 compliant API (this one is not yet feature complete).
    Do you know of any commercial O/R tool with such a rich API set?

    When OJB started, JDO was not yet specified; thus we started with an ODMG interface first.
    In what sense is ODMG dead? What features of the JDO API do you miss in ODMG?

    Supporting ODMG is extremely useful for projects that need integration with OODBMS and RDBMS!

    "They announce some kind of JDO support, but I'm afraid it won't happen for a long time."

    Wake up! It's already there! We even have a complete JDO tutorial that demonstrates how to use the OJB JDO implementation!

    "OSS products are a good bargain if you don't have hard QoS constraints. They generally focus on features rather than on robustness, stability, scalability,"

    Not true for OJB!
    We have a regression test suite with more than 220 test cases. We even ship this regression test suite with our binary and source distributions, to allow users to check whether there could be any problems with their target database of choice.

    We also provide a performance test suite that allows you to evaluate whether OJB provides sufficient performance in high-load scenarios.

    Do you know of any commercial product that publishes its regression testbed and performance tests?
    We are really taking QA seriously. OJB is meant for mission-critical applications, so users must be provided with the means to ensure quality!

    OJB ships with a highly scalable client/server architecture. It's possible to run a clustered OJB server on multiple VMs on multiple physical machines. We also have a distributed cache...

    "but you can still develop all these features yourself if you want (just compare the time spent with the cost of a supported commercial product)."

    Not necessary with OJB...
    Here's a short example: in my company we had been using TopLink. For a mid-range project we had to pay about 100,000 EUR (developer and runtime licenses).

    For the past half year we have been using OJB for all new projects. It works great even for mission-critical apps! No problems with respect to QoS aspects (robustness, stability, scalability)!

    And it saves us several tens of thousands of EUR per project!

  31. Marc,

    It does look like you're spreading some FUD here yourself about OJB and ODMG. It seems like you're just spouting off about a product you don't know all that much about. I've been working with OJB for about 5-6 months now, and it seems to be a good bit better than some of the commercial ORM products I've worked with. Sure, the JDO feature set is complete, but JDO will be there. On top of which, if the JDO implementation is anywhere near as good as the PersistenceBroker or ODMG implementations, it should be _excellent_.

    Side question: Aren't a good number of the people from the JDO spec the same people who were on the ODMG?

    The OJB developers seem to be excellent developers, with good attention to detail, good testing and regression (better than what I do), good performance testing, quality, robustness, yadda. So far we've tested OJB on SQL Server, Oracle, PostgreSQL, and Sybase without any problems. A couple of bugs here and there, but they got fixed quick-fast.

    Lastly Thomas Mahler and some of the other developers seem to be extremely knowledgeable in this area.

    Maybe you should really try this stuff before you make apparently misinformed comments. Maybe you tried OJB back when it was in version .80 or something. But it just keeps getting better. Plus, I've learned some really cool stuff looking through their source (although that's not really germane here).

    -Newt
  32. oops I mean "Sure, the JDO feature set is [NOT] complete"
  33. Thomas,

    Setting aside the question of which product supports what, I think one of the important factors in choosing whether to go with an O/R engine is whether you can tolerate the performance overhead relative to plain JDBC calls (in addition, of course, to the level of feature support and the efficiency of the engine's runtime implementation).

    My question here is rather specific: I'm interested in using one of the open-source O/R implementations, and the choice has finally been narrowed down to two products, OJB and Hibernate. The feature support looks pretty similar (this does not cover the architecture), and the important question left is performance. The folks at Hibernate.org talk about using runtime reflection rather than generated-code solutions, upon which (among other things) they claim that it's "an order of magnitude faster than another very well-known opensource solution" (we all know who they are talking about).

    What do you have to say about that? Are there any benchmarks available out there?
  34. Hi,

    OJB provides multiple access strategies:
    - JavaBeans compliant (allows working with BeanInfo classes)
    - A reflection-based approach that cooperates with access control managers
    - A high-speed reflection strategy that does not care about access control managers and thus gains maximum speed.

    We ship a complete performance test suite with our product, so everyone is invited to test whether OJB provides sufficient power in a given environment.
    (see http://jakarta.apache.org/ojb/performance.html for details)

    I read on the Hibernate page that their solution has a performance impact of about 5 - 15% as compared with native JDBC.
    The OJB figures are quite similar.

    But those figures depend much on the target database.
    With the fast HSQLDB the OJB app runs even faster than the equivalent JDBC app!
    Thus I don't think that it is adequate to make a decision based on figures someone publishes on the web.
    It's better to check performance in your target environment. OJB gives you everything to try out performance and compatibility issues.
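    As a rough illustration of the difference between the access strategies mentioned above (my own example, not OJB code): a mapping layer can read a persistent field either through its JavaBeans getter or directly via reflection, and calling setAccessible(true) once up front is what lets the "high-speed" strategy skip the access-control check on every subsequent read.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;

// Illustrative only -- not OJB's actual code. Shows two of the field
// access strategies a mapping layer can use on a persistent object.
public class FieldAccessDemo {
    public static class Customer {
        private String name = "Alice";
        public String getName() { return name; } // JavaBeans-style accessor
    }

    public static void main(String[] args) throws Exception {
        Customer c = new Customer();

        // Strategy 1: go through the public JavaBeans getter.
        Method getter = Customer.class.getMethod("getName");
        String viaBean = (String) getter.invoke(c);

        // Strategy 2: read the private field directly via reflection.
        // setAccessible(true) is done once, so later reads skip the
        // access-control check entirely.
        Field f = Customer.class.getDeclaredField("name");
        f.setAccessible(true);
        String viaField = (String) f.get(c);

        assert viaBean.equals("Alice") && viaField.equals("Alice");
    }
}
```

    Either way, the per-field cost is a handful of method calls; as Thomas says, whether that matters can only be settled by measuring against your own database.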
  35. Thank you Thomas for clearing this up.

    You are right, the decision shouldn't be based wholly on what somebody wrote somewhere on the net, but at the same time it is unwise to ignore it.

    Anyway, I'll be trying your testsuite sometime later on my own environment, but for the time being, let's pick up your own results posted on your site, keeping in mind this:

    <q n="Thomas">
    I read on the Hibernate page that their solution has a performance impact of about 5 - 15% as compared with native JDBC. The OJB figures are quite similar.
    </q>

    and this:

    <q n="Thomas">
    But those figures depend much on the target database. With the fast HSQLDB the OJB app runs even faster than the equivalent JDBC app!
    </q>

    Now, the results posted are for the fast HSQL database, which according to you should be better than those for straight JDBC. A look at the results reveals that none of the OJB tests outperform their JDBC counterparts (actually, none of them fall in the 5-15% range pointed out).

    I don't know where the [ojb] Time: 54,151 & [jdbc] Time: 50,568 results come from. Simple accumulation of the spans says that:

    for OJB tests we have: 53,288 ms and
    for JDBC tests we have: 33,139 ms.

    If I'm not missing something here, this means we have over 60% performance degradation when using OJB over JDBC, and that's supposedly on a fast database. I wonder how things will look with PostgreSQL!

    I think a 60% penalty is too high to be outweighed by any of the benefits I can gain from using an O/R framework.
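For what it's worth, the 60% figure is easy to reproduce from the accumulated totals above; here is a throwaway check using only the numbers quoted in this post:

```java
// Quick sanity check of the overhead figure, using the accumulated
// totals quoted above (53,288 ms for OJB, 33,139 ms for plain JDBC).
public class OverheadCheck {
    public static void main(String[] args) {
        double ojbMs = 53288.0;
        double jdbcMs = 33139.0;
        // Relative slowdown of OJB versus plain JDBC, in percent.
        double overheadPct = (ojbMs - jdbcMs) / jdbcMs * 100.0;
        System.out.printf("OJB overhead: %.1f%%%n", overheadPct); // prints: OJB overhead: 60.8%
    }
}
```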
  36. Hello,

    By definition, an O/R mapper provides access to a relational database by generating SQL (via code generation or run-time building). Comparison with non-SQL access is another matter. Thus, an O/R mapper can NOT be faster than plain JDBC/SQL (unless the hand-written SQL is BAD).

    Now, an O/R mapper definitely degrades speed, but to what extent? With the database on a separate machine, even the simplest request (select sysdate from dual) takes 1-10 ms. Typical requests return within 10-100 ms, plus the time to deliver the next getFetchSize() rows.

    In comparison, with pre-generated code the framework adds maybe 2-20 intermediate method calls at a cost of perhaps 1-10 ns each. Run-time reflection and metadata discovery can take longer (I have no statistics on this).

    My point is that any speed claims must be backed by raw data. In this case the data must clearly separate framework time, JDBC and network time, database time, data-transfer time, and ResultSet-to-Java-object transfer time (the last is often done by the framework).

    From the OJB statistics it is not clear whether the framework was involved in each of the 10,000 queries or whether a PreparedStatement was prepared only once by the framework. Without this information, the framework's speed can be miscalculated by a factor of 10,000!
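As a sketch of the kind of breakdown I mean (the phase names and bodies below are placeholders, not OJB code), each component should be timed separately rather than folded into one total:

```java
// Sketch: time each phase separately instead of reporting one total.
// The phase bodies are empty placeholders standing in for real
// framework, JDBC/network, and object-materialization work.
public class PhaseTimer {
    static long time(Runnable phase) {
        long start = System.nanoTime();
        phase.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long frameworkNs = time(() -> { /* query translation, metadata lookup */ });
        long jdbcNs      = time(() -> { /* statement execution + network */ });
        long mappingNs   = time(() -> { /* ResultSet -> Java object transfer */ });
        System.out.println("framework: " + frameworkNs + " ns");
        System.out.println("jdbc+net:  " + jdbcNs + " ns");
        System.out.println("mapping:   " + mappingNs + " ns");
    }
}
```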

    AlexV.
    <quote>By definition, an O/R mapper provides access to a relational database by generating SQL (via code generation or run-time building). Comparison with non-SQL access is another matter. Thus, an O/R mapper can NOT be faster than plain JDBC/SQL (unless the hand-written SQL is BAD).
    </quote>

    It can! In certain scenarios an O/R layer can take advantage of caching, and caching can improve performance by orders of magnitude (since no database access is involved at all).
    It is very easy to write a benchmark demonstrating that a caching O/R layer is much faster than native JDBC!
    That's why I keep repeating: *never trust a benchmark!* Run your own tests to see whether your application scenarios perform well.
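To illustrate (a toy identity-map sketch, not OJB's actual cache implementation): once an object has been materialized, repeat lookups by key never touch the database at all.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy identity-map cache: the first lookup pays the full database cost,
// every repeat lookup for the same key is a plain in-memory Map hit.
public class IdentityMapCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader; // stands in for the real DB read

    public IdentityMapCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V lookup(K key) {
        // Invokes the loader only on a cache miss.
        return cache.computeIfAbsent(key, loader);
    }
}
```

With a decent hit rate, most reads cost a HashMap lookup instead of a network round trip, which is exactly how a benchmark can show an O/R layer "beating" raw JDBC.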

    As mentioned on our http://jakarta.apache.org/ojb/performance.html page:
    "
    "There is nothing like a free lunch."
    (North American proverb)

    Object/relational mapping tools hide the details of relational databases from the application developer. The developer can concentrate on implementing business logic and is freed from RDBMS-related coding with JDBC and SQL.

    O/R mapping tools make it possible to separate business logic from RDBMS access by forming an additional software layer between the two. Introducing new software layers always eats up additional computing resources.
    In short: the price of using O/R tools is performance.

    Software architects have to take this tradeoff between programming comfort and performance into account when deciding whether an O/R tool is appropriate for a specific software system.

    This document describes the OJB Performance TestSuite, which lets you compare OJB against native JDBC programming on your RDBMS of choice.

    Interpreting the results of these benchmarks carefully will help you decide whether using OJB is viable for specific application scenarios or whether native JDBC programming should be used for performance reasons.
    "

    The OJB team is not trying to sell anything. We give you a framework for free, along with tools that will help you decide whether it's appropriate to use our framework or not.




  38. Hello Thomas,

    I completely agree with most of your comments.
    Persistence frameworks in general have a negligible
    degrading effect on speed, bring additional
    services (like caching), and seriously facilitate
    coding.

    AlexV.

     
    On a simple object model (or rather a relational model expressed in Java) with a simple transactional model, a JDBC expert can produce better code than any O/R mapping tool, if he is willing to spend a lot of time on what the tool can handle in just a few minutes.

    Any O/R mapping tool comes with a lot of performance and tuning features, such as statement batching, prepared-statement pooling, client-side caching, and much more. A good programmer can do the same, but at a huge cost.
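For example, here is roughly what hand-coded statement batching with a reused PreparedStatement looks like (a sketch; the customer table and name column are made-up names for illustration):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Sketch: insert many rows with one reused PreparedStatement and a single
// batch execution, instead of building a fresh statement per row.
public class BatchInsert {
    static void insertAll(Connection con, String[] names) throws SQLException {
        String sql = "INSERT INTO customer (name) VALUES (?)"; // hypothetical table
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (String name : names) {
                ps.setString(1, name); // bind a parameter, don't rebuild the SQL
                ps.addBatch();
            }
            ps.executeBatch(); // one round trip for the whole batch
        }
    }
}
```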

    But what if the object and transactional models are more complex? I very much doubt that a reasonable programmer is willing to spend months trying to achieve the same performance that any O/R mapping tool will provide in just a few minutes.

    O/R mapping tools are not designed for trivial applications that just access a few tables. Most people comparing O/R tools with hand-written JDBC simply don't have enough time to build a complex test case to check this.

    cheers, marc
    As you can read in the disclaimer on our performance test page:
    "important note: this document is not finished yet."

    The figures you quote are very old and are just an example of what the suite's output will look like.
    Starting a debate on these figures won't be helpful.
    Just this much: as HSQLDB is an in-memory database it is *very* fast, so the *percentage* overhead of OJB (or any other O/R layer) is much higher than against a typical out-of-process database.
     
    It seems from what Thomas said that OJB also relies on runtime reflection, so maybe Hibernate's statement is really about Castor (the other well-known OSS O/R tool)?

    The Hibernate claim seems strange, because in another hot TheServerSide discussion there is a performance comparison between two OSS EJB servers (JBoss and Jonas). The Jonas team said they are faster because they use code generation instead of reflection.

    Best regards, marc
  42. Hi marc,

    As an FYI, reflection-based performance in general is very competitive as long as you don't repeat the lookup itself.

    You can see the improvement if you compare the last couple of Sun (HotSpot) JVM versions.
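To illustrate the point about not repeating the lookup (a toy sketch, not Tangosol code): resolve the java.lang.reflect.Method once and reuse it, so only the comparatively cheap invoke sits on the hot path.

```java
import java.lang.reflect.Method;

// Sketch: cache the java.lang.reflect.Method once instead of calling
// Class.getMethod() on every access -- the repeated lookup, not the
// invoke itself, is the expensive part.
public class CachedAccessor {
    private final Method getter;

    public CachedAccessor(Class<?> type, String getterName) throws NoSuchMethodException {
        this.getter = type.getMethod(getterName); // one-time lookup
    }

    public Object read(Object target) throws Exception {
        return getter.invoke(target); // cheap repeated call
    }
}
```

For example, new CachedAccessor(String.class, "length").read("hello") returns 5 without a second getMethod() lookup.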

    Peace,

    Cameron Purdy
    Tangosol, Inc.
  43. Let me give you a tip:

    http://www.theserverside.com/home/thread.jsp?thread_id=14314#52964

    Probably somebody from Hibernate.org could comment on this reflection vs. code-generation part, "One Ovthafew" maybe.

    Can you please point me to the discussion you mentioned?

    Thanx

    Yaser
  44. Yaser

    here is at least one discussion on that subject

    http://www.theserverside.com/home/thread.jsp?thread_id=12698

    best regards, marc
  45. Thomas

    Please accept my apologies.
    In my mind small doesn't mean bad, but simple, light, easy to discover and deploy (small is beautiful).

    About the ODMG: there is no FUD in saying that ODMG is dead. This is simply a fact! The proof is that you decided to start work on JDO. ODMG is coupled to ODBMSs, and that is why it didn't succeed.
    I think this is the best way forward, but I wouldn't say that your JDO implementation is already usable on a real project. When do you expect it to be complete?

    BTW: I know another O/R mapping product that has as many interfaces as you do, including ODMG (C++ and Java): ObjectDriver from InfObjects. ODMG was nice in the C++ world, because there was no alternative standard.


    About QoS and so on: I think that a commercial company like Thought or the former ObjectPeople (I've seen on that site that Oracle claims to have welcomed more than 90 people from ex-WebGain/TopLink) has dozens of people in its engineering and QA teams.
    Their products are used in production on very big projects, and they can develop a lot of features because they get requirements, and money, from a lot of users. How many people are working full-time on OJB today? Who is paying them? And how many free contributors are there?

    cheers, marc.
  46. <quote>
    About the ODMG: there is no FUD in saying that ODMG is dead. This is simply a fact ! The proof is that you decided to start works on JDO.
    </quote>
    I love your persistence, Marc (pun intended). You claim a standard is dead based on the fact that OJB has decided to start work on JDO.
    Please give us at least some substance to indicate why ODMG is not good enough or why JDO is any better (technically, not the usual marketing speak about standards, etc.).
    Making vague statements about ODMG being suitable for C++ and ODBMSs doesn't help.
  47. Ravi

    0. JDO is better simply because it's alive. ODMG was nice indeed.
    1. Do you know at least one guru still active in ODMG? David Jordan and the others are now completely involved in JDO (see JDOCentral if you don't believe me). And Francois Bancilhon (former O2 CEO and leading ODMG expert) has no more DBMS-related activities.
    2. Can you explain to me why ODMG, despite being an ODBMS "standard", has never been completely implemented by any ODBMS vendor? ObjectStore never implemented it, and Versant's support was a laugh. Only Poet and O2 tried to implement some subparts.
    3. Anyway, OQL was a nice query language, but very difficult to implement efficiently on any known RDBMS.
    4. And what about ODL? It was a pity to see the ODMG trying to run after the OMG. What was the point of an interface language (IDL-like) for defining implementations?

    best regards, marc
  48. IMHO with ODMG it was not practically possible for the vendors to support the whole spec because of differing base technologies.

    But as with ODMG, you must ask yourself whether SQL is a standard now. I believe its management has ceased, and just about every vendor implementation has proprietary extensions.

    One thing I can't fathom about this very long discussion is why nobody wants to pay for anything. You all seem to like getting paid, but are quite happy to kick the arses of small ISVs.

    If you like the technology, whether it's $6K or $6 million, pay the man!!
  49. "Why pay for anything if you find one of the comparable opensource project out there such as apache's OJB? OJB has all of the features I want." - Hai Hoang

    If you have executives who are willing to dive in and take the risk then go for it! If it meets all your product needs even better!

    There are a lot of excellent OpenSource projects that are very good but I am personally reluctant to use one that integrates such a difficult feature. I am quite sure someone will flame me for eating the "blue pill" and not the "red pill". (I like living in the Matrix.)

    Spitting out a bunch of strings is one thing... Mapping business objects to data is a whole different ball of wax to me. Maybe I need to change my perception of how to use OpenSource in this layer of an application. Right now I have to be honest and say it would take a lot of convincing for me to use an OpenSource O/R mapper today.

    Greg


  50. Gregory,

    You may want to try our OR-Mapping product, JDX. Currently in its third major release, JDX can reverse-engineer Java classes from an existing schema to give you a big jump-start on your current project. JDX supports complex object modeling including class-hierarchies. The patented technology of JDX is non-intrusive, flexible and easy-to-use. No need to pre-process or post-process your code; no static JDBC/SQL code generation. Sporting a small set of simple and consistent APIs, JDX accelerates your development process and reduces the maintenance costs. Check out a free eval version from our web site http://www.softwaretree.com

    Damodar Periwal
    Software Tree
    Simplify Data Integration
  51. Greg,

    http://www.firestarsoftware.com/products/index.shtml

    If you get a chance, take a look at ObjectSpark. It can capture your schema, forward-engineer your persistent object model, automatically generate your mappings, and then generate Java, .NET, or COM+ persistent components from the same model. The product also supports XML mapping, has a powerful object-based query tool, and a whole lot more.

    And yes, I am a developer at FireStar.

    Thanks for reading.
    Kevin
  52. Hi All,

    Just wanted to bring to your attention that we have had support for JMS-based transactional distributed caching for some time now. And this caching support is available for EJBs, JDOs, as well as plain Java objects (persistent, of course). Our product comes with different types of caching support:

    1. Process-level caching for simple Java applications. This caching has features like state management, change notification, etc.

    2. Distributed caching. These are basically process-level caches in a distributed environment, kept in sync through JMS.

    3. Clustered caching for load balancing.

    The cache is configurable in terms of the objects to be cached, the number of objects to be cached etc.

    Anyone interested can download both versions:

          1. FrontierSuite for J2EE
          2. FrontierSuite for JDO

    from

    www.ObjectFrontier.com

    regards
    S Rajesh Babu
    ObjectFrontier Inc
    www.ObjectFrontier.com
  53. O/R mapping are nice if you are an object programmer.
    If you just need to access some raw data from tables, JDBC is OK.

    JDBC programming is also nice if you are a consultant whose business is to sell as many days as possible to your customers, explaining to them that you need to test all the variations of a complex WHERE clause in order to reach better performance.

    Many programmers tend to think that application performance mostly relies on fine SQL tuning.
    It's a little bit like thinking the speed of a car depends on its color (because Ferraris are red ;-)

    Microsoft has no object persistence strategy, simply because they never had any object strategy ;-)
    (remember the MFC architecture ;-))
    Joking aside, they have announced ObjectSpaces, which is supposed to be a kind of JDO (though it is quite hard to find useful resources on that product). If Bill himself says there is a need for O/R mapping frameworks, shouldn't you be convinced?

    cheers, mark
  54. Performance of Entity Beans[ Go to top ]

    > The main difficulty we have had with 2.0 entity beans is the fact that even a local interface call has to go through a fairly heavy interception layer, and the performance for using simple getters/setters for a large set of beans was poor.

    The EJB 1.1 specification suggested that entity beans be modeled as coarse-grained objects. (See, for example, 4.1.2.) The EJB 2.0 specification changed that recommendation, to claim that the component model was flexible enough (with the addition of local interfaces) to represent a fine-grained persistent object that embodies the persistent state of a coarse-grained business object (see again 4.1.2, but in EJB 2.0). Certainly this is a natural approach using container-managed relationships (e.g. a customer has many orders; an order has many line-items; and customer, order, and line item are all entity beans).

    But Zohar's experience is a common one, judging from feedback I have received from my customers. A business object might have hundreds of related beans in one or more relationships. But the overhead of having calls to these entities travel through the stack of J2EE services was high--and frequently unneeded. If the entities are hidden from the client behind a session facade (a very typical pattern), that session bean will frequently contain all the declarative transactional and security information necessary. The one service required of the entity beans by many customers is *persistence*.

    I solved this problem in the current release of my product (the MVCSoft Persistence Manager) by adding a feature called "lightweight interfaces." These provide most of the semantics of entity EJBs without going through the EJB container's interceptor stack, so their overhead is roughly that of using plain Java objects (i.e. negligible). In some tests the performance improvement was an entire order of magnitude. (I'm sure other solutions to this problem are possible--my point is that fine-grained usage of entity beans is a valid approach.)
  55. Zohar,
     
    Why O/R mapping tools are better ?
    * as you said, it is much faster (IMHO this point alone is enough)
    * because coding is much less tedious
    * because it fully supports the Java object model
    * and because once your business model is coded into Entity Beans, you'll need a heavy EJB server to run any application, even a simple batch process.

    And if you like standards, JDO is the definitive standard for persistence (while J2EE is a standard for everything and even more).

    EJB 2.0 is a standard, but who will really deploy EJB applications? I think Sun is working on a new CMP mechanism based on their JDO layer. This is probably the future of EJB.

    cheers, mark
  56. Marc -

    <Marc>
    EJB 2.0 is a standard, but who will really deploy EJB applications?
    </Marc>

    Plenty of people. EJB != Persistence. You are right when you say that if you are just looking for persistence, then something besides EJB is a good idea. But EJBs offer much more than persistence, especially in a good app server implementation. In the right circumstances, EJBs are great (from experience). Again, and this point has been made in other threads, people kind of went crazy using EJBs early on. They are not for every project and are not a silver bullet but used well, they are quite useful. Especially EJB 2.X.

    Cheers
    Ray
  57. Ray

    You're right, speaking about EJBs in general is nonsense.
    I definitely agree Session Beans provide a lot of interesting features (distribution, transactions...).
    Even Entity Beans can be helpful in some cases, but I cannot believe that someone will use them to access/persist fine-grained information.

    Simply because they are too hard to code, too slow, and have no object-model support...

    Cheers, mark
  58. Marc -

    Using entity beans effectively depends entirely on how you model the system - and coming from a CORBA background, I find entity beans quite easy to code and if designed properly, plenty fast. I'm not sure what you mean by no object model support in particular. Agreed, if someone is using entity beans strictly to access/persist fine grained information then they are barking mad. However, if you need concurrency, transactions, distribution, asynchronous messaging, naming, and security along with that persistence, entity beans do a fine job if designed and integrated properly.

    Cheers
    Ray