Oracle Proposes Open Source Persistence Project at Eclipse

  1. At EclipseCon this week, Oracle announced that it has become an Eclipse Strategic Developer. The company also said it will donate its Java persistence framework, Oracle TopLink, to the open source community. Oracle is submitting a proposal for a new Eclipse project to deliver a complete persistence platform based on Oracle TopLink. Read the Press Release and the FAQ for more information. While TopLink has been best known for its object-relational support and work with the Java Persistence API (JPA), it has grown into much more in recent releases, addressing Object-XML and EIS data access. The proposed Eclipse project will focus on several key persistence capabilities with a shared infrastructure:

    - Object-Relational Mapping (JPA)
    - Object-XML Mapping (JAXB)
    - Non-Relational Data Sources (EIS through JCA)
    - Service Data Objects (SDO)
    - Data Access Service (DAS)
    - Database Web Services (XML-Relational)

    Oracle will also work with the OSGi expert group members to create a set of blueprints that define how OSGi applications can access standardized persistence technologies. Oracle will continue to devote its resources to the project and welcomes others in the community to participate and ensure the platform meets the needs of all of its users.
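
    As a rough illustration of the "shared infrastructure" idea, here is a minimal sketch (the Customer class and its fields are hypothetical) of one domain class carrying both JPA object-relational metadata and JAXB object-XML metadata:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.xml.bind.annotation.XmlAccessType;
    import javax.xml.bind.annotation.XmlAccessorType;
    import javax.xml.bind.annotation.XmlAttribute;
    import javax.xml.bind.annotation.XmlRootElement;

    @Entity                                // object-relational mapping (JPA)
    @XmlRootElement                        // object-XML mapping (JAXB)
    @XmlAccessorType(XmlAccessType.FIELD)
    public class Customer {
        @Id
        @XmlAttribute
        private Long id;

        private String name;               // a column for JPA, an element for JAXB

        public Long getId() { return id; }
        public String getName() { return name; }
    }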

    Threaded Messages (54)

  2. What we really need is one of the following: A persistence framework that runs in a layer between a standard language & platform neutral database connector and the database - that is not dependent on a dbms-specific proprietary stored procedure language. The connector to this layer should be available for any programming language - Java, .NET, COBOL, 'C', C++, Perl, Python, etc.

    If we cannot get that, then how about a standard - non-dbms specific or dependent - mechanism for subscribing to and reporting events that occur within the database, such as inserts, updates, deletes, etc.? Currently, the use of a persistence framework's (TopLink, Hibernate, etc.) cache is unwise in a diverse technology environment. Updates to the database from other languages and technologies will bypass the framework because it is language-specific, and thus render the cache "dirty" without its knowledge. It would be nice if developers using the framework could subscribe (via the mapping declaration) to events in the database that "dirty" their objects. The frameworks do not support this because the implementation is very dbms-dependent.

    So, Oracle, quit farting around and playing at open standards and open source and work with your competitors to deliver something we really need.
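
    To make the idea concrete, here is a purely hypothetical sketch: the ChangeEvent, ChangeListener and DatabaseChangeSource types are invented for illustration (no such standard API exists today), while javax.persistence.Cache is the real JPA 2.0 shared-cache interface, used here as a stand-in for whatever second-level cache the framework exposes. The point is that a writer in any language would cause the affected cache entries to be evicted.

    // Hypothetical row-change event published by the database.
    interface ChangeEvent {
        String table();
        Object primaryKey();
    }

    // Hypothetical callback registered with the database.
    interface ChangeListener {
        void onChange(ChangeEvent event);
    }

    // Hypothetical, vendor-neutral source of change notifications.
    interface DatabaseChangeSource {
        void subscribe(String table, ChangeListener listener);
    }

    public class CacheInvalidator implements ChangeListener {
        private final javax.persistence.Cache sharedCache; // real JPA 2.0 interface
        private final Class<?> entityType;

        public CacheInvalidator(javax.persistence.Cache sharedCache, Class<?> entityType) {
            this.sharedCache = sharedCache;
            this.entityType = entityType;
        }

        public void onChange(ChangeEvent event) {
            // Evict the stale entry so the next read goes back to the database,
            // no matter which language or application performed the write.
            sharedCache.evict(entityType, event.primaryKey());
        }
    }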
  3. Instead of a non-relational persistence framework!
  4. Updates to the database from other languages and technologies will bypass the framework because it is language-specific, and thus render the cache "dirty" without its knowledge.
    This issue is more of a technique problem and less of a technology problem.
  5. Updates to the database from other languages and technologies will bypass the framework because it is language-specific, and thus render the cache "dirty" without its knowledge.

    This issue is more of a technique problem and less of a technology problem.
    If you use caching and have other applications updating the database via routes that bypass the persistence framework, this problem occurs. The reason the other applications bypass the framework and its cache, other than those that pre-date its use, is the technology dependence of the framework's implementation. So, in my view, it is a technology problem. You don't have any good options for using Hibernate or TopLink with applications written in 'C', COBOL, C++, COM/DCOM, etc. Writing the triggers and alerts (to alert the cache manager) for each DBMS and language/runtime you have is more effort than is probably warranted. So what technique would you propose that preserves cache integrity in such an environment? (Please don't tell me to recode everything in Java.)
  6. Updates to the database from other languages and technologies will bypass the framework because it is language-specific, and thus render the cache "dirty" without its knowledge.

    This issue is more of a technique problem and less of a technology problem.


    If you use caching and have other applications updating the database via routes that bypass the persistence framework, this problem occurs. The reason the other applications bypass the framework and its cache, other than those that pre-date its use, is the technology dependence of the framework's implementation. So, in my view, it is a technology problem. You don't have any good options for using Hibernate or TopLink with applications written in 'C', COBOL, C++, COM/DCOM, etc. Writing the triggers and alerts (to alert the cache manager) for each DBMS and language/runtime you have is more effort than is probably warranted.

    So what technique would you propose that preserves cache integrity in such an environment? (Please don't tell me to recode everything in Java.)
    If you don't rewrite the legacy stuff, then you won't be able to use a common caching framework. I did say "more of" and "less". I was leaving room for legacy apps. In that instance, you are basically screwed. If you allow new "apps" to bypass the cache/domain/persistence layer, you are using the wrong technique.
  7. If you allow new "apps" to bypass the cache/domain/persistence layer, you are using the wrong technique
    Unfortunately the popular Java persistence layers do not support (many) other languages/platforms. Large enterprises typically do not confine themselves to one development language or run-time at any given point in time and languages come and go far more frequently than the DBMS of choice. For example, in the 20 years I've used Oracle, I've used at least 10 programming languages. (by platform I mean the run-time such as COM, .NET's CLR or Java's JVM)
  8. If you allow new "apps" to bypass the cache/domain/persistence layer, you are using the wrong technique


    Unfortunately the popular Java persistence layers do not support (many) other languages/platforms. Large enterprises typically do not confine themselves to one development language or run-time at any given point in time and languages come and go far more frequently than the DBMS of choice. For example, in the 20 years I've used Oracle, I've used at least 10 programming languages.

    (by platform I mean the run-time such as COM, .NET's CLR or Java's JVM)
    I've worked in many large organizations, so I know what you are saying and understand the direction you are coming from. I am not saying organizations should choose one language. I am saying that the db should be considered as part of the application, not as a separate system. If one doesn't, then not only will caching be an issue, but so will duplication of effort. Matt is right that the only way to cache this is at the db. Until it doesn't work anymore. But maybe getting people to develop this way is just my personal pipe dream. I doubt you will find a Java persistence layer that will ever support what you want. However, there might be a caching tool - http://www.javalobby.org/java/forums/t83967.html - though this will require some coding changes.
  9. I am saying that the db should be considered as part of the application, not as a separate system.
    Your desire is OK, but sometimes the database wins over the desired application architecture. It's just reality. Even a single database instance may outlive many technology eras. When you think about moving your focus away from the data, think twice. Nebojsa
  10. I am saying that the db should be considered as part of the application, not as a separate system.


    Your desire is OK, but sometimes the database wins over the desired application architecture. It's just reality. Even a single database instance may outlive many technology eras. When you think about moving your focus away from the data, think twice.

    Nebojsa
    Yeah. I know the reality. I am not talking about the way things are but the way they should be (not 100% but the exception vs the rule). And I am doing this not to preach that point but to show that this issue is a technique problem, not a technology one. I am not moving my focus from the "data". I am moving it from the database. People have a hard time separating the two. And while you might keep the same DB vendor, you usually don't keep the same version and the db structure usually changes. Not always.
  11. I am not moving my focus from the "data". I am moving it from the database. People have a hard time separating the two.
    There will always be a highest tier of complete data abstraction, and the appropriate name for this tier is the database (if not the file system). Just as you use at least two applications to manage a single file (the OS file manager and an application to open it), it is natural to use more than one application on top of a database (at least a backup/restore utility and your application). If you try to achieve complete data abstraction with a Java persistence layer, then you are moving toward an object database. Shared data, managed by services like file systems and databases, should exist in the overall architecture of IT resources. Nebojsa
  12. If you try to achieve complete data abstraction with a Java persistence layer, then you are moving toward an object database.
    I really don't think this is the case, and I think there is good evidence to back up my view. Even JDO 1.x, which was a far more abstracted specification than JDO 2.x, has been primarily used for relational systems, and very effectively so. Transparent persistence, as in JDO, Hibernate, JPA, etc., does not, of itself, specify the type of storage - it is no more targeted at object databases than any other kind of persistence. I don't think the term 'abstraction' is right here anyway. What is happening is more appropriately considered to be a separation of concerns: the ability to isolate persistence code from business logic.

    Anyway, I think it is simplistic to say that the appropriate place for the highest data abstraction should be the database. Data is often far more mobile than that. It is imported and exported in various formats. It moves between applications. I like JDO because it gives me the flexibility to represent all these forms of data using the same object model, and to handle them with the same API. Increasingly, implementations of JPA will allow that too. I agree with Mark - people have a hard time separating "data" from "database".
  13. I am saying that the db should be considered as part of the application, not as a separate system.
    Your desire is OK, but sometimes the database wins over the desired application architecture. It's just reality. Even a single database instance may outlive many technology eras. When you think about moving your focus away from the data, think twice. Nebojsa
  14. So, Oracle, quit farting around and playing at open standards and open source and work with your competitors to deliver something we really need.
    +1 If Oracle "cared" about open standards, and open source, and persisting to other datastores they would have embraced JDO. We'll leave the rest unsaid ...
  15. So, Oracle, quit farting around and playing at open standards and open source and work with your competitors to deliver something we really need.

    +1

    If Oracle "cared" about open standards, and open source, and persisting to other datastores they would have embraced JDO. We'll leave the rest unsaid ...
    JDO doesn't solve the problem Will brought up any more than TopLink or Hibernate do. JDO vendors that have implemented cache stores (Kodo, Castor, etc.) suffer from the same db-cache sync issues.
  16. ORACLE was wrong when they thought JDO was the enemy. ORACLE finally fought against HIBERNATE, and was defeated. This is just the capitulation.
  17. ORACLE was wrong when they thought JDO was the enemy.
    ORACLE finally fought against HIBERNATE, and was defeated.
    This is just the capitulation.
    Oracle and JBoss have been working closely together co-leading the EJB3 specs. There are no fights or enemies; don't create something out of nothing. A little history lesson for you: TopLink has been around for a long time. It was one of the first object persistence tools. For a time it supported the JDO API (pre 1.0), and even to this day it contains legacy support for the parts of the JDO API that it once actively supported. The fact that TopLink is used as one of the persistence implementations for EJB is only a natural evolution of the product, not a capitulation. I used both Hibernate and TopLink at my company and find both to be great products.
  18. I used both Hibernate and TopLink at my company and find both to be great products.
    Can you please do a small comparison?
  19. I used both Hibernate and TopLink at my company and find both to be great products.


    Can you please do a small comparison?
    Before I say anything, let me note that besides a few EJB3 apps that we are doing proof-of-concept work on, we are not using the newest version of Hibernate or TopLink. We use both products differently, so I don't think I'm in a position to do a head-to-head comparison. We have two main sets of applications: one is a set of Spring/Hibernate apps, the other is a set of Struts/TopLink (wrapped in EJB session beans) apps.

    I do find differences between the products when using their underlying APIs, and differences in how persistence objects are defined. For example, I find the underlying TopLink API to be more comprehensive. We don't have control over our database schema, and honestly the schema is a disaster, so there are some cases where I appreciate the extra flexibility of the TopLink API. On the other hand, the Hibernate API feels more streamlined and less verbose, which is refreshing. Hibernate adopted EJB3-style annotations way before TopLink. I like keeping the persistence configuration information in code, so this was a big plus for me. You can also define persistence objects in code with TopLink, but again it is on the verbose side. The only other note is that TopLink's built-in connection pooling services seem to be a little more robust. Many of our apps use connection pools managed by the app server, so of course in those cases it doesn't matter.

    We have been building smallish EJB3 proof-of-concept applications and have found that in EJB3 mode, using Hibernate or TopLink for the persistence implementation produces equal results. In all of our tests, code written for one works on the other without fail, and performance in our environment is also roughly equal. We are very happy with what we are seeing.
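
    For readers unfamiliar with the "configuration in code" style mentioned above, here is a minimal sketch using standard EJB3/JPA annotations (the Order class, table, and column names are hypothetical); both Hibernate and TopLink accept this form when acting as a JPA provider:

    import java.math.BigDecimal;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "ORDERS")            // map the class onto an existing table
    public class Order {
        @Id
        @GeneratedValue
        private Long id;

        @Column(name = "ORDER_TOTAL")  // column name differs from the field name
        private BigDecimal total;

        public Long getId() { return id; }
        public BigDecimal getTotal() { return total; }
    }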
  20. So, Oracle, quit farting around and playing at open standards and open source and work with your competitors to deliver something we really need.

    +1

    If Oracle "cared" about open standards, and open source, and persisting to other datastores they would have embraced JDO. We'll leave the rest unsaid ...


    JDO doesn't solve the problem Will brought up any more than TopLink or Hibernate do. JDO vendors that have implemented cache stores (Kodo, Castor, etc.) suffer from the same db-cache sync issues.
    First, Castor is not a JDO implementation, even if it shares part of the name. But, more important, what kind of cache are you talking about? The L2 cache? If that is the case, then this is not (only) a persistence technology issue. This problem pops up whenever you manage replicas of some entity and some software component does not use the common layer.

    Regarding JDO in particular, the standard uses the term "cache" (not a very successful term, IMHO) for the in-memory table of objects loaded during the lifetime of the PersistenceManager; they are discarded when the PersistenceManager is closed (JDO allows finer control, but for brevity...). I don't see any problem in using such a persistence framework in a heterogeneous environment. The L2 cache, instead, contains data useful to resume persistent objects without accessing the underlying datastore, and is not bound to the lifecycle of the PersistenceManager.

    About a "standard" mechanism to notify changes: well, facing someone issuing a wonderful UPDATE xx WHERE zzz would be a challenge anyway. I can already see Oracle, IBM, and MS forming a great committee aimed at providing such a standard. I think the web services lesson has still to be learned.... Guido.
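
    A minimal sketch of the PersistenceManager-scoped "cache" described above, using the JDO 2 API (the Account class is hypothetical; its metadata and enhancement are omitted):

    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;
    import javax.jdo.Transaction;

    public class Level1CacheScope {

        // Hypothetical persistence-capable class.
        public static class Account {
            private long id;
            private String owner;
        }

        public static Account load(PersistenceManagerFactory pmf, long id) {
            PersistenceManager pm = pmf.getPersistenceManager();
            Transaction tx = pm.currentTransaction();
            try {
                tx.begin();
                // The loaded instance lives in this PersistenceManager's
                // in-memory table of managed objects (the JDO "cache").
                Account account = (Account) pm.getObjectById(Account.class, id);
                tx.commit();
                return account;
            } finally {
                if (tx.isActive()) {
                    tx.rollback();
                }
                // Closing the manager discards those objects; a level-2 cache,
                // if the vendor provides one, is not tied to this lifecycle.
                pm.close();
            }
        }
    }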
  21. What we really need is one of the following:

    A persistence framework that runs in a layer between a standard language & platform neutral database connector and the database - that is not dependent on a dbms-specific proprietary stored procedure language.
    Hardly. The problem you brought up most certainly exists, but you will not be able to solve it by using a "platform neutral database connector". Too many companies rely on some amount of stored procedures and/or triggers. The only *true*, never-fail solution is not to use any application-layer caching for any data that might change. In this case database clustering is your only option to scale. Otherwise you are left with understanding your data, the applications accessing that data, and only caching data in the application layer when it is appropriate or necessary. In this case I think that TopLink/Hibernate provides a reasonable solution to help accomplish this. Most databases can be used in a way that allows them to be "not dependent on a dbms-specific proprietary stored procedure language". If this is your goal, then don't use non-standard features. Many of these non-standard features exist because, depending on your requirements, they fill a need. PL/SQL and triggers are not a bad thing in themselves.
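
    As a small illustration of "only cache data in the application layer when it is appropriate", here is a minimal sketch (the Country entity is hypothetical) using Hibernate's annotations to put only rarely-changing reference data into the second-level cache, assuming a cache provider is configured:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cache(usage = CacheConcurrencyStrategy.READ_ONLY) // safe to cache: this data does not change
    public class Country {
        @Id
        private String isoCode;

        private String name;
    }

    Entities that other applications update directly would simply not be annotated for caching, so every read for them goes to the database.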
  22. What we really need is one of the following:

    A persistence framework that runs in a layer between a standard language & platform neutral database connector and the database - that is not dependent on a dbms-specific proprietary stored procedure language.


    Hardly.

    The problem you brought up most certainly exists, but you will not be able to solve it by using a "platform neutral database connector".
    Really, and why not? There are products that let you create intermediation layers accessed via common database connectors (ODBC and JDBC). From there you can write code in a number of languages that interfaces with the dbms or other store and implements O-R mapping, type validation & enforcement, etc. It’s your choice as to how much is in the dbms’ stored procedures versus your programming language of choice. Interestingly, DB2’s stored procedure approach is very much along these lines in terms of supporting common programming languages instead of inventing one, and running on top of the database instead of inside it, but they are still product-specific.
    Too many companies rely on some amount of stored procedures and/or triggers.

    The only *true*, never-fail solution is not to use any application-layer caching for any data that might change.
    And why is that the only *true* solution? There is a good reason to want an object cache in the application tier: the cost of O-R conversions. As to scaling, the better caches distribute transparently, so you can share your object cache across multiple machines and servers.
    In this case database clustering is your only option to scale.
    No, it isn’t my only option, but I want better options.
    Otherwise you are left with understanding your data, the applications accessing that data, and only caching data in the application layer when it is appropriate or necessary. In this case I think that TopLink/Hibernate provides a reasonable solution to help accomplish this.
    Huh? That doesn’t solve the problem. A data abstraction layer should be language and platform neutral so that one implementation can be reused now and over time. It should also be dbms neutral (not your code, just the layer and its framework) so that you can have a more uniform approach for all your dbms products. Let’s limit the differences to the functional capabilities of the different products (what you code) and how to leverage them.

    Most databases can be used in a way that allows them to be "not dependent on a dbms-specific proprietary stored procedure language". If this is your goal, then don't use non-standard features. Many of these non-standard features exist because, depending on your requirements, they fill a need. PL/SQL and triggers are not a bad thing in themselves.
    I use PL/SQL and triggers to do these things when I use Oracle. When I use DB2 or SQL Server, the language changes. It is true that the logic I code for this in whatever language I use will have dependencies both on the physical design and on the dbms product itself (in part because physical design must consider the capabilities of the product you’re using), but using completely different languages for each dbms you have causes implementation and maintenance problems and certainly limits your opportunities for reuse. Your implementations of even identical logic will look very different from each other. It can be difficult to determine whether or when they are doing the same things. All that is beside the point. My data abstraction layer (O/R, type enforcement, caching, etc.) should not be limited to supporting one or two languages/platforms, and the dbms community could help make that happen.
  23. All that is beside the point. My data abstraction layer (O/R, type enforcement, caching, etc.) should not be limited to supporting one or two languages/platforms, and the dbms community could help make that happen.
    In an ideal world, I for the most part agree with what you are saying. I guess the heart of our disagreement is that I don't think it will become a reality (at least not any time soon). Oracle cannot dump PL/SQL; there is just too much of it out there and too many companies relying on it. Plus, database clustering solves most of the caching problems you are bringing up, and personally I feel it solves them in a more sound and proven way. My biggest beef with database clustering is that it is still too expensive. This will change as the open source vendors implement better clustering in their databases. As far as total database abstraction, yeah that would be great...really it would, but once again I don't think it is realistic. There is nothing in it for database vendors to build a complete database abstraction layer, so I doubt they ever will.
  24. All that is beside the point. My data abstraction layer (O/R, type enforcement, caching, etc.) should not be limited to supporting one or two languages/platforms, and the dbms community could help make that happen.


    In an ideal world, I for the most part agree with what you are saying. I guess the heart of our disagreement is that I don't think it will become a reality (at least not any time soon).

    Oracle cannot dump PL/SQL; there is just too much of it out there and too many companies relying on it. Plus, database clustering solves most of the caching problems you are bringing up, and personally I feel it solves them in a more sound and proven way. My biggest beef with database clustering is that it is still too expensive. This will change as the open source vendors implement better clustering in their databases.

    As far as total database abstraction, yeah that would be great...really it would, but once again I don't think it is realistic. There is nothing in it for database vendors to build a complete database abstraction layer, so I doubt they ever will.
    Thank you very much for your comments. Can you tell me the versions of Hibernate and TopLink you are using, as well as the application server name and version? Thanks in advance.
  25. Thank you very much for your comments.
    Can you tell me the versions of Hibernate and TopLink you are using, as well as the application server name and version?

    Thanks in advance.
    Hibernate 3.1.x running in Tomcat 5.5; TopLink 9.0.4.x running in Oracle IAS 10.1.2 Enterprise Edition.
  26. The Many Lives of TopLink

    This is cool - TopLink takes on yet another life. Let's see: first Smalltalk, then to Java, then BEA buys it, then sells it to Oracle, then the RI, and now EclipseLink. I remember when EJB2 was rolling out and someone from BEA telling me that people at BEA referred to their EJB2 implementation (with container-managed relationships, etc.) as the "TopLink Killer". I thought that was pretty silly then, but I guess they truly believed it enough to sell it. Then, to add some irony to that, TopLink was the RI for JPA/EJB3, which is sort of a killing of the EJB2 that was supposed to kill TopLink. May TopLink/EclipseLink live on (and good riddance, EJB2).
  27. Re: The Many Lives of TopLink

    Steve, just to clarify one detail: BEA never owned TopLink. Between The Object People and Oracle, TopLink was owned by WebGain. While the company colours and VC funding were similar to BEA's, they were separate companies. Cheers, Doug
  28. Re: The Many Lives of TopLink

    Oh yeah, WebGain. I have blurred my history. It seems like such a long time ago now - before the dot-com crash. Well, WebGain and Visual Cafe have disappeared while TopLink lives on.
  29. Oh yeah, WebGain. I have blurred my history. It seems like such a long time ago now - before the dot-com crash. Well, WebGain and Visual Cafe have disappeared while TopLink lives on.
    Just to clarify things even more: I believe that, in the previous post, VC means 'venture capital', not Visual Cafe ;))). Cheers, Artur
  30. Back to abstraction

    My understanding was that people are happy with JPA because it targets a more concrete domain (the RDBMS) compared to entity beans (prior to EJB3) and JDO, which try to address the same issue at a higher level of abstraction. IMO, the reason is that at that level of abstraction it is almost impossible to provide an easy-to-use solution. Now we are back to the same discussion: we are talking about a new persistence project that is going to cover a wide range of data sources, including non-relational ones! Am I missing something? Thanks,
  31. Re: Back to abstraction

    My understanding was that people are happy with JPA because it targets a more concrete domain (the RDBMS) compared to entity beans (prior to EJB3) and JDO, which try to address the same issue at a higher level of abstraction. IMO, the reason is that at that level of abstraction it is almost impossible to provide an easy-to-use solution.
    I don't understand this argument. The JPA and JDO APIs are now pretty similar, and when you use them you are working at the same level of abstraction (transparent persistence using a portable query language). Both JPA and JDO have very rich relational mappings. They are both pretty easy to use as well.
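
    To illustrate the "same level of abstraction" point, here is a minimal sketch (the Customer class is hypothetical, mapping metadata omitted) of the equivalent lookup through JDO 2 and through JPA; both are portable queries over the object model rather than over tables.

    import java.util.List;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;
    import javax.persistence.EntityManager;

    @SuppressWarnings("unchecked")
    public class SameAbstraction {

        // JDOQL: the filter is expressed against the Customer class.
        public static List<Customer> withJdo(PersistenceManager pm, String name) {
            Query q = pm.newQuery(Customer.class, "name == :name");
            return (List<Customer>) q.execute(name);
        }

        // JPQL: the equivalent query through the JPA EntityManager.
        public static List<Customer> withJpa(EntityManager em, String name) {
            return em.createQuery("SELECT c FROM Customer c WHERE c.name = :name")
                     .setParameter("name", name)
                     .getResultList();
        }

        // Hypothetical domain class.
        public static class Customer {
            private String name;
        }
    }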
  32. Re: Back to abstraction

    My understanding was that people are happy with JPA because it targets a more concrete domain (the RDBMS) compared to entity beans (prior to EJB3) and JDO, which try to address the same issue at a higher level of abstraction. IMO, the reason is that at that level of abstraction it is almost impossible to provide an easy-to-use solution.


    I don't understand this argument. The JPA and JDO APIs are now pretty similar, and when you use them you are working at the same level of abstraction (transparent persistence using a portable query language). Both JPA and JDO have very rich relational mappings. They are both pretty easy to use as well.
    Not at all. JDO supports many more O/R mappings than JPA. ...and the myth turns out to be just imagination... The below link has a matrix for the most common relationships. http://www.jpox.org/docs/orm_relationships.html
  33. Re: Back to abstraction

    Not at all. JDO supports many more O/R mappings than JPA. ...and the myth turns out to be just imagination...
    Now that is quite amusing: JDO having more O/R mappings than the 'targeted at RDBMS' JPA!
  34. Re: Back to abstraction

    JDO supports many more O/R mappings than JPA. ...and the myth turns out to be just imagination...

    The below link has a matrix for the most common relationships....
    Who cares about the number of exotic mappings/relationships! An ORM which is RDBMS-oriented will do at least the following:

    1. Subqueries: select * from ... where X in (select max(..) from .. where .. group by ..)
    2. bulk update/delete
    3. cursors, flexible locking ("select for update") and other RDBMS-specific things.

    Not being able to use all the power of the RDBMS is quite silly, and with JDO all the features mentioned above may only exist as vendor-specific extensions, thus creating vendor lock-in and killing the idea of "one specification - many implementations".
  35. Re: Back to abstraction

    Who cares about the number of exotic mappings/relationships!
    Don't really think I'd call a basic indexed list "exotic" (in fact I can't remember the last project I worked on that didn't need one), or a Collection of Strings (who would want that?), or a Map using a join table, or a basic identifying relation ...
    1. Subqueries: select * from ... where X in (select max(..) from .. where .. group by ..)
    Found in JDO 2.1.
    2. bulk update/delete
    Bulk delete is in JDO 2, actually (a quick sketch follows this post).
    3. cursors, flexible locking ("select for update") and other RDBMS-specific things.
    The spec you're referring to (JPA 1) doesn't actually specify such things explicitly, not down to that level of detail. They are found in implementations, but then we get on to vendor lock-in issues, don't we?
    and with JDO all the features mentioned above may only exist as vendor-specific extensions
    I've just answered that, I think ... FUD. As always, people are advised to assess the requirements of their project with respect to the features required and what the specifications offer. JPA is suitable for many, but then JDO is suitable for many more too. We put together that matrix to promote debate on the relationships required by typical projects and to push the ORM specification process further (whatever name they give to the specification). Some more (non-FUD-based) comparison: http://www.jpox.org/docs/persistence_technology.html
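
    A minimal sketch of the JDO 2 bulk ("delete by query") operation mentioned above; the Customer class and its active field are hypothetical:

    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    public class BulkDeleteExample {

        // Delete all inactive customers via a query, rather than loading and
        // deleting them one by one in application code.
        public static long deleteInactive(PersistenceManager pm) {
            Query q = pm.newQuery(Customer.class, "active == false");
            return q.deletePersistentAll(); // returns the number of instances deleted
        }

        // Hypothetical persistence-capable class.
        public static class Customer {
            private boolean active;
        }
    }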
  36. Re: Back to abstraction

    I'm not referring to JPA; I'm talking about an RDBMS-oriented ORM solution vs an abstract one. Let me see:

    1. Subqueries. Databases have supported them for over 10 years, and they appear only in the latest JDO specification. Well, better late than never; we just need to wait a few months more for JDO 2.1-compatible implementations... no problem, we've already been waiting for years ;))
    2. bulk update
    3. cursors, flexible locking ("select for update") and other RDBMS-specific things.

    All of that is still a valid example of abstraction vs RDBMS orientation. That's all. I'm just trying to prove that too much abstraction is silly, because you lose flexibility.
  37. Re: Back to abstraction

    2. bulk update
    I am very skeptical about bulk update use with an ORM. It is a potentially dangerous operation.
  38. Re: Back to abstraction

    2. bulk update


    I am very skeptical about bulk update use with an ORM. It is a potentially dangerous operation.
    +1 Good lord yes! And what would be the point? And would it still be a bulk update? By bulk, I mean many thousands, tens or hundreds of thousands or more.
  39. Re: Back to abstraction

    Come on, everything is a "potentially dangerous operation"; life itself is "potentially dangerous" ;) Bulk update is not more dangerous than bulk delete, right? OK, for you the keyword is "performance optimisation". If during performance optimisation you need to jump to native SQL, it's ugly. It breaks all your "exotic ;)" mapping efforts; you come back to fields and tables. You cannot avoid performance optimisation. Bulk updates are _sometimes_ needed; you might never need them, someone else might. If the ORM is too far from the RDBMS, you will surely go into native SQL to optimise. BTW, do you consider cursors and "select for update" potentially dangerous as well? I dream of seeing an ORM-based application that handles great volumes and uses an RDBMS without native SQL inside. Wouldn't it be nice to be able to implement any native DB feature with JDOQL, HQL, JPAQL, EJB QL, OracleQL, whatever ;)
  40. Re: Back to abstraction

    Bulk update is not more dangerous than bulk delete, right?
    Yes, it is. The reason is to do with object cache management.
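
    A minimal sketch (the Account entity is hypothetical) of the cache-management issue: a JPQL bulk update goes straight to the database and bypasses the persistence context, so an instance that was already loaded keeps its stale state until it is explicitly refreshed.

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;

    public class BulkUpdateStaleness {

        public static void demo(EntityManager em, Long id) {
            Account a = em.find(Account.class, id);      // managed, balance as loaded

            em.createQuery("UPDATE Account a SET a.balance = 0")
              .executeUpdate();                          // database changed, 'a' was not

            // Without this refresh, a.getBalance() still returns the pre-update value.
            em.refresh(a);
        }

        @Entity
        public static class Account {
            @Id
            Long id;
            int balance;
            public int getBalance() { return balance; }
        }
    }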
  41. Re: Back to abstraction

    http://jyog.com/forum/
  42. Re: Back to abstraction

    JDO supports many more O/R mappings than JPA. ...and the myth turns out to be just imagination...

    The below link has a matrix for the most common relationships....

    Who cares about the number of exotic mappings/relationships!
    In JDO, a List type maintains the order in which elements are added. People call this exotic!
  43. Re: Back to abstraction

    JDO supports many more O/R mappings than JPA. ...and the myth turns out to be just imagination...

    The below link has a matrix for the most common relationships....

    Who cares about the number of exotic mappings/relationships!


    In JDO, a List type maintains the order in which elements are added. People call this exotic!
    In JDO you can have a List (or Map) of Strings (or Dates, or Integers..) as a persistent field of an object. Equally exotic, I guess.
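
    For readers unfamiliar with the mappings being discussed, a minimal sketch (hypothetical Customer class, JDO metadata omitted): a collection of simple values can itself be a persistent field, with element order preserved when it is mapped as a List.

    import java.util.ArrayList;
    import java.util.List;

    public class Customer {
        private Long id;

        // Persisted as an ordered collection of values; no separate
        // "Nickname" entity class is needed for the elements.
        private List<String> nicknames = new ArrayList<String>();

        public void addNickname(String nickname) {
            nicknames.add(nickname);
        }

        public List<String> getNicknames() {
            return nicknames;
        }
    }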
  44. Dumb Question

    Is this just the tooling, or is Oracle actually donating TopLink (presumably "Express") to Eclipse? While I think seeing TopLink open sourced is a good thing... Why at Eclipse?? I guess I think of IDEs and SWT and UI stuff when I think of Eclipse, not O-R mapping tools.
  45. Re: Dumb Question

    We are donating the full functionality of TopLink. This includes all of the JPA implementation plus all of our advanced ('exotic' :) object-relational mappings, object-XML mappings and much more.
    Why at Eclipse?
    Eclipse is much more than just an IDE. Eclipse is a platform that already includes many different technologies such as Equinox (OSGi), Rich Client Platform, and soon a complete Java Persistence Platform. Doug
  46. Re: Dumb Question

    We are donating the full functionality of TopLink. This includes all of the JPA implementation plus all of our advanced ('exotic' :) object-relational mappings, object-XML mappings and much more.

    Why at Eclipse?

    Eclipse is much more than just an IDE. Eclipse is a platform that already includes many different technologies such as Equinox (OSGi), Rich Client Platform, and soon a complete Java Persistence Platform.

    Doug
    Doug, Wasn't there another JDO vendor who did this/tried this almost 2 years ago? http://www.theserverside.com/news/thread.tss?thread_id=33391
  47. What do we need now!!

    As someone said before, we don't need another persistence framework or a war between the existing persistence frameworks. Most of the existing persistence frameworks work great for a given platform against any database. What we need now is: a persistence framework that, once built for an application, should be easy enough to switch from one language to another (language-independent). Maybe it's already there, but I haven't come across one!
  48. Oracle is giving TopLink to the general public because it lost the battle with Hibernate :)
  49. Oracle is giving TopLink to the general public because it lost the battle with Hibernate :)
    I've got to say you win the challenge for intellect on this news item. It must have taken you a long time to come up with something so original and deep. With such an attitude you clearly aren't interested in the technical advantages of other ORMs over Hibernate, so we won't bother you further.
  50. I keep on using my good old iBATIS. There has never been a better choice. It is quick to learn, allows you to use 100% of your DB's potential, and has very little runtime overhead. I have used Hibernate and JPA and they both suck, especially when it comes to bottom-up design. I used Hibernate on a 250+ table model and it takes *ages* to make the mapping work correctly. About scalability, I am now involved with high-load transactional systems (an EFT platform) and there is no way an O/R mapping could help.
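
    For context, a minimal sketch of the iBATIS style being described, assuming iBATIS 2.x and a hypothetical sqlmap-config.xml with a mapped statement named "Account.getAccount": you write the SQL yourself and iBATIS only maps parameters and result rows to objects.

    import java.io.Reader;
    import com.ibatis.common.resources.Resources;
    import com.ibatis.sqlmap.client.SqlMapClient;
    import com.ibatis.sqlmap.client.SqlMapClientBuilder;

    public class IbatisExample {
        public static Object loadAccount(long id) throws Exception {
            // Hypothetical configuration file naming the SQL maps and data source.
            Reader reader = Resources.getResourceAsReader("sqlmap-config.xml");
            SqlMapClient sqlMap = SqlMapClientBuilder.buildSqlMapClient(reader);

            // "Account.getAccount" is a hypothetical mapped statement whose
            // hand-written SQL lives in the SQL map file, not in Java code.
            return sqlMap.queryForObject("Account.getAccount", Long.valueOf(id));
        }
    }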
  51. About scalability, I am now involved with high-load transactional systems (an EFT platform) and there is no way an O/R mapping could help.
    Some of the highest-load transactional systems (such as eBay) use O/R mapping.
  52. Since you know the internals, could you please elaborate? Is it Java? Is it Hibernate? Thanks.
  53. Some of the highest-load transactional systems (such as eBay) use O/R mapping.
    There are many cases where the example you use (eBay) does not apply. To what extent do eBay's transactions involve contention for the same rows across multiple tables? Can eBay get by with first-come, first-served updates (no optimistic locking rules to implement or pessimistic locking)? Whenever people use eBay, or worse, Google, as examples of highly scalable systems and attempt to apply their lessons to internal business applications, I think they are making a mistake. In Google's case they are supporting large numbers of readers of replicated data and misses are acceptable. eBay has true transactions in the sense that there are updates and contention (bidders), but do they really care about the users that have read the data and are in the process of editing (bidding)? Do they have a lot of internal consistency rules to enforce? I doubt it. I think that first to post wins in their case. My guess is that they tend to compartmentalize the bidding activity so it can be partitioned and horizontally scaled. That isn’t always possible with internal systems.

    High-volume OLTP in many internal mission-critical systems can have more difficult challenges. Locking mechanisms are employed to enforce business rules and prevent corruption - the kind of corruption associated with inconsistent state. By that I mean that the data you are only reading sometimes has to be locked (pessimistically or optimistically) because it governs the legitimacy of the updates and inserts you make elsewhere. Your rules may require a number of other (volatile) items across the database in order to determine which values are valid for the row and column you are updating. Insurance has a lot of rules like that.

    None of this means you cannot or should not do O/R mapping, but as volume increases and your response time requirements remain sub-second, you are forced to begin looking for places to cut and O/R mapping is one place where the overhead can be high. Hence, the reason so many of the O/R frameworks include caching even though the DBMS has its own (non-object) cache.
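
    A minimal sketch (the Policy entity and its fields are hypothetical) of the optimistic locking referred to above, as expressed in JPA: the provider compares the version column at flush/commit time and fails the transaction with an OptimisticLockException if another writer got there first, instead of holding database locks while the user edits.

    import java.math.BigDecimal;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Policy {
        @Id
        private Long id;

        @Version                  // incremented on every update; a stale update
        private long version;     // is rejected at commit time

        private BigDecimal premium;

        public void setPremium(BigDecimal premium) { this.premium = premium; }
    }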
  54. Fair comment. What I was objecting to was a general, unqualified statement saying that O/R mapping can't help or be used in such situations. As you say, it depends. There are many places where the eBay comparison does apply. I specifically mentioned eBay (as against Google and others) because it does involve high-volume transactions, of at least some kind.
    None of this means you cannot or should not do O/R mapping..
    That was my point, in contradiction to the original post.
  55. ...but as volume increases and your response time requirements remain sub-second, you are forced to begin looking for places to cut and O/R mapping is one place where the overhead can be high. Hence, the reason so many of the O/R frameworks include caching even though the DBMS has its own (non-object) cache.
    It would be better to clarify which cache you are talking about. An ORM framework must have a unique in-memory object, in a given transaction, corresponding to the state persisted with a certain oid in the data store. So it is more a matter of consistency than of performance. Obviously this also allows you to avoid repeated retrieval of the same state during the same transaction. And even if the DB cache helps avoid disk access, don't forget network latency and transfer. The level 2 cache, instead, is used to store data useful for resuming persistent instances without accessing the DB (i.e. to increase performance). Guido
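
    A minimal sketch (the Customer entity is hypothetical) of the consistency guarantee described above, using JPA for brevity: within one persistence context, two lookups of the same identity return the very same in-memory object, and the second lookup does not need to hit the database.

    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.Id;

    public class IdentityMapExample {

        public static boolean sameInstance(EntityManager em, Long id) {
            Customer first = em.find(Customer.class, id);
            Customer second = em.find(Customer.class, id); // served from the context
            return first == second;                        // true for a managed entity
        }

        @Entity
        public static class Customer {
            @Id
            Long id;
        }
    }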