Versant Object Database version 8.0 released


  1. Versant Object Database version 8.0 released (37 messages)

    The Versant Object Database is suited for applications with complex domain models and unusually high OLTP requirements under high concurrency. High-performance OLTP is possible because, without using serialization, an object's in-memory representation is stored directly on disk, with no need to break the object down into the fixed type system of a relational database. In addition, since the system does not depend as heavily as relational databases on an overlay indexing system, conflicts with internal structures, which often result in request queueing in an RDB, can be avoided.

    The object database provides very high performance because object relationships are part of the database storage model. In relational databases (RDBs), relationships are not actually stored; they are calculated at runtime using JOIN operations. Using Versant (ODB), operations like a self-referencing JOIN over multi-million entry record sets return results in sub-second time. Using an RDB, these kinds of complex operations over millions of records often take minutes to complete because the related information must be calculated. Using an object database, these operations are modeled as a collection reference in the application space, and the collection contents are immediately available without a costly JOIN.

    The Versant object database also supports transparent object distribution in cloud deployment topologies. You can perform database queries and other CRUD operations in parallel across a physical distribution of nodes, without having to be explicitly aware of the physical distribution in your business logic. Object references are valid throughout a dynamic cluster, allowing you to move objects between nodes without impacting application code, and you can simply connect and disconnect from physical nodes on the fly as you add more databases to a cluster.
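    To make the storage-model point concrete, here is a minimal sketch (the Employee class and its fields are purely illustrative, not taken from any Versant example) of a self-referencing relationship that application code navigates as an ordinary collection rather than computing it with a JOIN:

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical domain class: the "reports" relationship is just a stored field,
    // so traversing it is plain object navigation with no JOIN-style computation.
    public class Employee {
        private String name;
        private Employee manager;                                 // self-reference
        private List<Employee> reports = new ArrayList<Employee>();

        public Employee(String name) { this.name = name; }
        public Employee getManager() { return manager; }
        public List<Employee> getReports() { return reports; }

        // Counts everyone in the reporting tree by walking the stored references.
        public int countAllReports() {
            int total = reports.size();
            for (Employee e : reports) {
                total += e.countAllReports();
            }
            return total;
        }
    }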

    Threaded Messages (37)

  2. App Server DB

    Most often for me the DB is not on the same box as my Java application code. I'm curious as to what control developers have when they query back an object graph (how much of the object graph gets pulled over the wire). For example, say your Java application is on an "app server" and that is remote from the "db server" (so a query will fetch back object graphs from the db and this occurs over a network). Is there some support for fetching back a partially populated Java object? If I query back Order 1001 ... do I always get all the properties of that order? If I query back Order 1001 and also want the customer and also the related order details ... can I define which related parts of the object graph should be "materialized" for my program on the app server (aka which bits get pulled over the wire)? For me, I'd be comparing these issues to how we do this with SQL/JDBC against an RDBMS (using ORM). Thanks, Rob.
  3. Re: App Server DB

    Most often for me the DB is not on the same box as my Java application code.
    Yes, the database being on the same box is rarely the case for Versant either, though it can run in-process too. Versant's other open source database, db4o, is mostly used in those kinds of scenarios. With Versant, logical databases are typically spread physically over many machines. They look and feel like one physical database, but they are not.
    I'm curious as to what control developers have when they query back an object graph (how much of the object graph gets pulled over the wire).
    There are various levels of control. You can simply return the references to the objects which satisfy the query, or you can actually get the objects and then some configurable level of referenced objects. If you are familiar with the ORM concept of FetchGroups, then you will know the concept because it's the same; the concept has existed in our technology since back in the early '90s.
    Is there some support for fetching back a partially populated java object? If I query back Order 1001 ... do I always get all the properties of that order?
    Yes, you will always get all the properties of the order. If there are large objects you only access for specific use cases, then it is best to model those as a separate object rather than embedding them. Typically, it's the cost of setting up the RPC that is most expensive. Of course, there is also the cost ( time, memory ) of instantiating the object, even if you would only populate a few attributes.
    If I query back Order 1001 and also want the customer and also the related order details... can I define which related parts of the object graph should be "materialized" for my program on the app server (aka which bits get pulled over the wire) ?
    Yes, per the above, it is the ORM FetchGroup concept. Except of course, there is no JOIN calculation to get the related levels of referenced objects. They are part of the storage system.
    For me, I'd be comparing these issues to how we do this with SQL/JDBC against RDBMS (using ORM )
    It is very similar to using ORM at the API level, just with no mapping. The object life cycle and transaction semantics of ORM tools have been the basis of the Versant object database API since before the recent popularity of ORM tools. It's really great, because anyone who is an expert in ORM tools like Hibernate is essentially an expert in the Versant object database ... you just don't need to bother with the mapping. Cheers, -Robert Use the right tool for the job. Versant Corp
  4. FetchGroups and Joins

    There are various levels of control. You can simply return the references to the objects which satisfy the query, or you can actually get the objects and then some configurable level of referenced objects.
    Can you point to or give code/query examples of this? (I really struggle to find any decent code examples of the query language VQL). Having control over what is fetched back "eagerly" vs fetched subsequently "lazily" controls the chattiness of the application (and generally has a significant performance effect) so it would be good to see examples of how this is controlled in VQL (well, I'm interested anyway). Hmmm, as I see it FetchGroups are for partially populating beans. Given Versant fetches all the properties I struggle to match that against FetchGroups but perhaps a few examples would help clarify things. Cheers, Rob.
  5. Re: FetchGroups and Joins

    Hi Rob, concert last night, so a little late getting started ;-)
    Can you point to or give code/query examples of this? (I really struggle to find any decent code examples of the query language VQL).
    This really depends on the language binding, but in VQL, which is our proprietary query API, it would look like the following:

    VQLQuery query = new VQLQuery( session, "select selfoid from model.Person order by ssn ASC" );
    Enumeration ofResults = query.executeWithCursor( 0, -1, Constants.IRLOCK, Constants.RLOCK );

    In the above: the 0 is an int option parameter used to do different things, like pin objects permanently into the cache on fetch, flush the cache contents to participate in the query scope, return result objects, etc. The -1 is the desired level of referenced objects to fetch; -1 means fetch all related objects, and you can set any level. The last two are lock controls on the class and instances respectively, to control select-for-update and the like. Using JDO, the way to do this is to use FetchGroups per the standard. This is pretty straightforward: set up a fetch group and then bind it to a query instance using query.setFetchGroup("named_path"); If object references are involved in the "named_path", then those objects are loaded across the wire on query execution. If you are using our .NET language binding and LINQ queries, it is similar to JDO.
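    To make the JDO side concrete, here is a minimal sketch using the standard javax.jdo FetchPlan API (the Order/Customer classes and the "order_with_customer" group name are illustrative; the group itself would be defined in the JDO metadata, and query.setFetchGroup( ) above is the Versant-specific shortcut):

    import java.util.List;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    public class FetchGroupQueryExample {

        // Hypothetical persistent classes; JDO metadata, including a fetch group
        // named "order_with_customer" that covers the customer reference, is assumed.
        public static class Order { String status; Customer customer; }
        public static class Customer { String name; }

        public static List runQuery(PersistenceManager pm) {
            Query query = pm.newQuery(Order.class, "status == \"OPEN\"");
            // Activate the named group so the customer reference is materialized
            // together with the query results instead of lazily, one RPC at a time.
            query.getFetchPlan().addGroup("order_with_customer");
            return (List) query.execute();
        }
    }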
    Having control over what is fetched back "eagerly" vs fetched subsequently "lazily" controls the chattiness of the application (and generally has a significant performance effect)
    I completely agree, this is essential. Versant has further programmatic control over this eager fetching for performance optimization, using two method calls on the database session ( connection, cache, tx, locking context ). They are groupRead( ) and getClosure( ) respectively. You pass in a collection containing one or more object references, and the RPC call will fast-cache the objects. groupRead( ) is used for breadth loading ( a single collection of objects loaded into the cache in one RPC ). getClosure( ) is used for depth loading: loading into the cache not just the objects in the collection, but referenced objects as well, down to a desired level of references.

    These are important because, in an ideal implementation, the object database is not so query focused. Instead, a query is often used just to enter a use case from a meaningful set of objects, and then you use algorithms to load the referenced objects ( we don't actually execute a "query" as you touch references, as is done in an ORM tool, because the relationships are baked into our storage system ). For example, in a social networking application, you might want to find the shortest path between two people. You would query to get the start and end points ( the people ), but you would use an algorithm to determine the shortest path, and that algorithm would use objects at increasing levels of reference until a path ( or paths ) is discovered. In Versant, that algorithm would not use queries to get each new level of references. You would simply use groupRead/getClosure to fast-load the next level of references and then process the match logic. This same approach is used in airline pathing, network pathing, supply chain pathing, etc. Anywhere you have complex chains of relationships ... no JOIN, just algorithms.

    Of course, you can still use our JDBC driver, hook up Crystal Reports or whatever, and do index-based queries for reporting. But the power of the database is exposed by using a robust model, not by optimizing queries. Hope this helps. Cheers, -Robert Use the right tool for the job. Versant Corp
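    A rough sketch of the traversal pattern described above, assuming the groupRead( ) signature shown later in this thread (the Person class, the Session placeholder and the lock constant are illustrative stand-ins, not documented Versant API):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class ReachabilitySketch {

        // Minimal stand-in for the Versant session API referenced in this thread;
        // the real class and constant names depend on the language binding.
        public interface Session {
            Object getDatabase();
            void groupRead(Collection<?> refs, Object database, boolean asRoots, int lockMode);
        }
        private static final int RLOCK = 1; // placeholder for Constants.RLOCK

        // Hypothetical persistent class.
        public static class Person {
            private List<Person> friends = new ArrayList<Person>();
            public List<Person> getFriends() { return friends; }
        }

        // Breadth-first reachability check: one groupRead( ) RPC per level of
        // references, then plain in-memory navigation -- no per-object queries.
        public static boolean connected(Session session, Person start, Person target) {
            Set<Person> visited = new HashSet<Person>();
            List<Person> frontier = new ArrayList<Person>();
            frontier.add(start);
            visited.add(start);
            while (!frontier.isEmpty()) {
                List<Person> next = new ArrayList<Person>();
                for (Person p : frontier) {
                    for (Person friend : p.getFriends()) {
                        if (visited.add(friend)) {
                            next.add(friend);
                        }
                    }
                }
                if (next.contains(target)) {
                    return true;
                }
                // Fast-cache the whole next level in a single RPC before touching it.
                session.groupRead(next, session.getDatabase(), true, RLOCK);
                frontier = next;
            }
            return false;
        }
    }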
  6. groupRead/getClosure

    Sorry ... lots of guesses here but: Are the groupRead/getClosure in-process on the Database? (you don't want them doing their thing over the wire due to performance right?) So from a Java app server... you can't control them? (you can use VQL and the API with the int fetch depth setting) That is, do you have to write some code to do the optimal query execution using groupRead/getClosure and deploy that code onto the Database? ... maybe I'm off base :)
  7. Re: groupRead/getClosure

    Sorry ... lots of guesses here
    Questions are good ... helps people learn and that includes me, I subscribe to the more brains are better than one theory ;-)
    Are the groupRead/getClosure in-process on the Database? (you don't want them doing their thing over the wire due to performance right?)
    Well, it is a combination. Each level is driven from the client, but the aggregation is done in the database for return to the client. Hopefully my other answer clarified this one. The best case would be if it were completely done in the server, but that only benefits the discrete object update case. Using the standard JDO and LINQ APIs, you would use query FetchGroups. Using VQL, as a simplified case, let's say you wanted to get all Cities in a particular State and process some use case, but the State also has references to Residents and you did not want to load those objects.

    Query query = new Query( context.session, "select selfoid from domain.State where name = $nme" );
    query.bind( "nme", "CA" );
    QueryResult results = query.execute();
    ....where's the error handling code :-)
    State st = (State) results.next();
    context.session.groupRead( st.getCities(), context.session.getDatabase(), true, Constants.RLOCK );
    ....now all the related cities have been loaded in 1 RPC. This eliminates any chatter that would have occurred if you simply started iterating on the results.
    //Now in this method, all objects are already cached, so no RPCs.
    processUseCase ...
    Iterator ofCities = st.getCities().iterator();

    So, from an implementation perspective, you usually create a 'loader' class which does use-case-based cache loading and consequently RPC optimizing. When you enter a use case, you pass your root objects to the loader and let it do something like the above, then continue with your business logic. Hope this helps. Cheers, -Robert Use the right tool for the job! Versant Corp
  8. Hmmm, JDO FetchGroups with int maxFetchDepth won't give you an optimal fetch - perhaps you should have a closer look at Ebean ORM's query language (but I'm biased :) ). "Nested Fetch Groups" might do though ... but you haven't mentioned that (and they aren't part of any spec yet, are they?).
    If you have groups of objects ...i.e. you query for orders which satisfy a particular date range ....e.g. today's orders, then you have an RPC for each level in the path.
    Hmmm... find order (*) join customer (name, status) join customer.shippingAddress (*) where orderDate = :today ... this returns multiple orders with their related customer information, and with Ebean ORM -> RDBMS it can be done in 1 query. I'd hope to be able to do that in 1 RPC to an OODBMS (but the OODBMS query features need to support that). Where an ORM -> RDBMS generally requires more than 1 RPC is when the query incorporates more than one OneToMany relationship, because that turns into a Cartesian product in the relational world. find customer join contacts /* OneToMany */ join orders /* OneToMany */ where name like :name ... This could be done with 1 query to the RDBMS but would result in a Cartesian product, so with Ebean ORM -> RDBMS it results in 2 queries. This is where I think ORM -> OODBMS can get really interesting... Hmmmm, digesting...
  9. Hmmm, JDO FetchGroups with int maxFetchDepth won't give you an optimal fetch - perhaps you should have a closer look at Ebean ORM's query language (but I'm biased :) ). "Nested Fetch Groups" might do though ... but you haven't mentioned that (and they aren't part of any spec yet, are they?).
    You mean a FetchPlan where you fetch "n" fields of the candidate, and then "m" fields of a related object, and "p" fields of a related object of the related object? Obviously that is part of the JDO FetchGroups spec.
  10. You mean a FetchPlan where you fetch "n" fields of the candidate, and then "m" fields of a related object, and "p" fields of a related object of the related object ?
    Yes, that's what I mean. Hmmm, I must be missing it. Is this in JDO 2.2 spec? Do you have a Page reference and/or an example? (I'm looking at jdo-2_2-mrel2-spec.pdf 12.7 Fetch Plan) Thanks, Rob.
  11. To perhaps be just a little bit clearer ... I mean in *ONE* query fetch part of order and part of customer and part of customer.shippingAddress etc. I read your reply ... "fetch "n" fields of the candidate, and then "m" fields of a related object, ... " again it sounded like multiple queries (as you navigate the object graph). So I'm asking if we can do this in 1 query... and hence get 1 network roundtrip to the DB to return just the properties we want. Google is not helping me out finding a JDO nested fetch group example :( Cheers, Rob.
  12. Is this in JDO 2.2 spec?
    All JDO 2.x specs. You define a FetchPlan (for the PM or the query) by adding FetchGroups to the "plan". Your groups can be for class A, or class B, or class C or whatever, so the plan applies across all the classes it includes groups for. The FetchPlan is the thing that the query can use to determine what gets loaded.
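    For example, a minimal sketch using the standard javax.jdo API (the group names here are illustrative; the groups themselves would be defined in the metadata of the classes involved):

    import javax.jdo.FetchPlan;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    public class FetchPlanExample {
        // Build a plan out of named groups spanning several classes, then let a
        // query (or getObjectById) use it to decide what gets loaded.
        public static void configure(PersistenceManager pm, Query query) {
            FetchPlan plan = pm.getFetchPlan();
            plan.addGroup("orderSummary");     // group defined on an Order class
            plan.addGroup("customerContact");  // group defined on a Customer class
            plan.setMaxFetchDepth(2);          // stop after two levels of references
            // The query's own plan can be adjusted independently of the PM's plan.
            query.getFetchPlan().addGroup("orderSummary");
        }
    }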
  13. Ok, I understood that ... but my problem is related to "maxFetchDepth" and I assumed that it is being used? (maybe a bad assumption here). Perhaps it is as simple as ... "maxFetchDepth is ignored when there is an explicit paths/joins specified in the JDO query". ... or roughly in JPA speak ... "maxFetchDepth is ignored when a 'JPQL fetch join' is specified"
    ... e.g. today's orders, then you have an RPC for each level in the path.
    So if you fetch A and join B (so they are both fetched in a single query) ... and fetch groups apply to both A and B (they are both partially populated based on the current fetch groups) ... and because we have an explicit join the "maxFetchDepth" is ignored ... and this results in multiple RPC calls to Versant (due to Robert's previous posts). If that's roughly what's happening ... then I think I'm getting the gist. Thanks for your patience ... apologies for the confusion.
  14. but my problem is related to "maxFetchDepth" and I assumed that it is being used?
    The (max) depth of the fetch, obviously :-) So if the fetch plan defines fetching down x levels of relations (from the candidate), it stops at the max depth. Or, with an example: A -> B -> C -> D, where A.b, B.c and C.d are all in the fetch plan and the max fetch depth is 2, it will fetch down 2 levels ... so A, its B, and its C are fetched. That's where it stops: 2 relations.
  15. and max fetch depth is 2 will fetch down 2 levels
    But it fetches down 2 levels on *ALL* paths (from A), right? ... you don't get to choose *WHICH* paths. Say from A there are 2 paths and you could go: A -> B -> C -> D ... and also A -> X -> Y -> Z. Then both paths are fetched ... A->B->C and A->X->Y. Let's say I wanted just the A->B->C path and not any other paths from A (not the X path). E.g. find orders (A), also fetch customer (B) and customer shipping address (C), but do not fetch the order details (X).
  16. But it fetches down 2 levels on *ALL* paths (from A) right?
    No, that's wrong: the FetchGroup specifies the names of the attributes, so you are specifying exactly which path. You can name the FetchGroups, so you can have multiple FetchGroups related to the same class ... presumably specifying different path expressions. Depending on the use case, you bind the appropriate named FetchGroup and it then does the right thing ... down your desired path. Cheers, -Robert Use the right tool for the job! Versant Corp
    Thought maybe a direct example would be best. Below is a way to spec a FetchGroup which will load Person -> Address -> Country -> code ( and NOT children ). Note that these are flattened when you incorporate one fetch group inside another. So, where you see "default" inside the "detail" group, it is concatenating the definitions. The "default" fetch group just says load all non-reference type fields, i.e. int, long, String, etc.

    public class Person { private String name; private Address address; private Set children; }
    public class Address { private String street; private String city; private Country country; }
    public class Country { private String code; private String name; }

    ...now here is a plan "detail" for the above model. ... <!-- name + address + country code -->

    Then you just use "detail" in places like query or getObjectById, by calling certain methods on your connection ( PM ) or query instance, etc., and binding a plan ( a collection of these fetch groups ) by name, or by modifying the active fetch plan on the fly. There are also ways to deal with things like collection content navigation and recursion, i.e. what if you have a self-referencing JOIN, when do you stop?? I muse at ... self-referencing JOIN ... those don't exist in the object database, just self-referencing objects ;-)

    Now, all of this said, I think it will depend from implementation to implementation whether or not things are done in one RPC per query. There is no spec on how the implementation materializes this information into the client cache ( PM ), just that it is done. So it is quite possible one implementation may do it in 3 RPCs and another in 1 RPC. I think, though, you will find that in any case where you have a collection of roots ( not discrete objects ), the implementation will use at most 1 RPC per level of reference. After all, the whole point of FetchGroups is to optimize the RPCs and minimize the loading of reference types which are not relevant to a use case. Hope this helps... Cheers, -Robert Use the right tool for the job! Versant Corp
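    The fetch-group XML itself did not survive in the post above, so here is only a plausible reconstruction following JDO 2.x metadata conventions (the exact field and group layout of the original is an assumption based on the surrounding description):

    <jdo>
      <package name="model">
        <class name="Person">
          <fetch-group name="detail">
            <fetch-group name="default"/>   <!-- name, but NOT children -->
            <field name="address"/>
          </fetch-group>
        </class>
        <class name="Address">
          <fetch-group name="detail">
            <fetch-group name="default"/>
            <field name="country"/>
          </fetch-group>
        </class>
        <class name="Country">
          <fetch-group name="detail">
            <field name="code"/>            <!-- country code only -->
          </fetch-group>
        </class>
      </package>
    </jdo>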
    Thanks for the example ... and maybe we are done :) It just seems that in this case max depth is either ignored or not used ... and the paths are specified by the fields in the fetch group (per the example above). So yeah, it's not clear to me how the conflict between max depth and the fetch group paths is resolved, but perhaps we are close enough :) Many thanks for your efforts and explanations!! Cheers, Rob.
  19. Licence?

    You stated that DB4O is _another_ open source OODBMS. But on your website there is no evidence that V8 is open source. Actually, there is a 'trial' download. My question is: what is the licence, and, if a proprietary one, how much does it cost? thanks Alex
  20. Re: Licence?

    You stated that DB4O is _another_ open source OODBMS. But in your website there is no evidence that V8 is open source.
    It does read a bit strange. The db4o database is open source and the Versant database is closed source.
    what is the licence, and, if a proprietary one, how much does it cost
    I posted an answer to this question a while back in another thread. Also worth noting, there is an "express" version being released in the next couple of months with lower pricing for people who are not trying to build the next Sabre (and don't need cloud-type deployments), but still have complex models and want to avoid mapping. http://www.theserverside.com/news/thread.tss?thread_id=58266#328366 Cheers, -Robert Use the right tool for the job. Versant Corp
  21. Re: Licence?

    The -1 is a desired level of reference objects to fetch, -1 means fetch all related objects, you can set any level.
    Hmmm, say I want to fetch down a certain path - say from order to customers to customer shipping address and *NOT* any other paths (like order to orderDetails etc). find order (*) join customer (name, status) join customer.shippingAddress (*) where id = 1007 Having an int fetch depth suggests you can't do that? Can you do this via some other bit of API? Cheers, Rob.
  22. Re: Licence?

    Hi Rob,
    Hmmm, say I want to fetch down a certain path - say from order to customers to customer shipping address and *NOT* any other paths (like order to orderDetails etc).

    find order (*)
    join customer (name, status)
    join customer.shippingAddress (*)
    where id = 1007

    Having an int fetch depth suggests you can't do that? Can you do this via some other bit of API?


    Cheers, Rob.
    If you are wanting to limit fan-out on unwanted references, then you are correct. To fetch down a certain path, you use either: 1) If you still want to do it just with a query, use the latest APIs, i.e. JDO or LINQ, and use FetchGroups ( which do (2) for you under the covers ). 2) Using the proprietary API, simplify your VQL query to get the order(s) and use the groupRead method to aggregate references and run down your desired path. In the above cases, if you are talking about a single order, then there is no optimization, so there is chatter for each object reference and you might as well just process business logic. If you have groups of objects, i.e. you query for orders which satisfy a particular date range, e.g. today's orders, then you have an RPC for each level in the path. So, if there are 3 levels of references and 100 orders you are processing for today, you do not do 100*3 = 300 RPCs, you do only 3 RPCs, just as in the case of a single object. Either way, today's implementation drives the levels of closure ( pre-caching of levels of referenced objects ) from the client side. Server-side closure is coming in our point release toward the end of the year. In most cases though, as you see from the above example, that next level of optimization only helps when dealing with discrete root objects, and it also assumes you are not trying to leverage caching. Cheers, -Robert Use the right tool for the job! Versant Corp
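    A rough sketch of option (2), running down the order -> customer -> shippingAddress path one level (and one RPC) at a time for a batch of orders (the domain classes and accessors are hypothetical, and the groupRead( ) signature follows the earlier example in this thread):

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    public class OrderPathLoaderSketch {

        // Placeholder for the Versant session type used in this thread.
        public interface Session {
            Object getDatabase();
            void groupRead(Collection<?> refs, Object database, boolean asRoots, int lockMode);
        }
        private static final int RLOCK = 1; // placeholder lock constant

        // Hypothetical domain classes.
        public static class Order { Customer customer; Customer getCustomer() { return customer; } }
        public static class Customer { Address shippingAddress; Address getShippingAddress() { return shippingAddress; } }
        public static class Address { }

        // Three RPCs total, no matter how many orders: one per level of the path.
        public static void loadOrderPath(Session session, List<Order> todaysOrders) {
            session.groupRead(todaysOrders, session.getDatabase(), true, RLOCK);

            List<Customer> customers = new ArrayList<Customer>();
            for (Order o : todaysOrders) {
                customers.add(o.getCustomer());
            }
            session.groupRead(customers, session.getDatabase(), true, RLOCK);

            List<Address> addresses = new ArrayList<Address>();
            for (Customer c : customers) {
                addresses.add(c.getShippingAddress());
            }
            session.groupRead(addresses, session.getDatabase(), true, RLOCK);
            // From here the business logic navigates cached objects with no further RPCs.
        }
    }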
  23. Versant Licenses and T-Shirts

    We have flexible licensing for VAR/OEM situations where Versant is embedded within an application, appliance or device. Some of our largest VAR/OEMs are Avaya, Alcatel-Lucent, Borland/MicroFocus, Ericsson, IBM, NEC, Oracle, Samsung, and Siemens. Depending on the volume, it can be a few hundred dollars per installation on up. For end users we provide workgroup-level pricing and enterprise pricing with replication and high availability. Also, we do have a new t-shirt with "No Mapping Just Objects". David Ingersoll VP, Sales Versant Corporation
  24. Versant & datagrid

    How can the Versant Object Database be seen in the context of a data grid? Distributed objects/data are typically found in products like Terracotta, Coherence and GigaSpaces, and it appears that Versant shares features with these products. Peter Veentjer Software Transactional Memory for Java http://multiverse.codehaus.org
  25. Re: Versant & datagrid

    How can the Versant Object Database be seen in the context of a data grid?
    VOD has pretty advanced features for distributed operation, such as:
    - Object migration: Objects can be migrated while applications still have transparent access to them.
    - Recovery: To ensure data integrity when multiple, distributed databases are used, VOD performs updates with two-phase commits.
    - Heterogeneity: Objects can be moved among heterogeneous platforms and managed in databases on numerous hardware platforms to take advantage of available resources in a network.
    - Schema management: Class definitions can be managed at run time on both local and remote databases. Class definitions are stored with the objects, which allows access to objects from applications that are running on different platforms and using multiple interface languages.
    - Expansion: Databases can be created, deleted, and expanded on local and remote platforms. Database volumes can span devices and platforms.
    - Backup: Data on one machine can be backed up to remote sites, tapes, or files. Multiple distributed databases can be backed up to save their state at a given point in time, which gives transactional consistency across multiple databases.
    - Security: Access to databases and system utilities is controlled through user authorization, which may be customized.
    - Session database: VOD implements the concept of a session database, which can be local or remote and which handles basic record keeping and logging for a session.
    - Connection database: Applications can connect to any number of local or remote databases and then manage objects in them as if they were local. You can work on objects in any number of databases at the same time in a distributed transaction.
    Hi, I see at least 2 use cases where an object database, like Versant, could be accepted (that is, more mainstreamly accepted): (1) use Versant as the primary production database, and synchronize it (a one-way relationship) with a relational database used for BI. (2) use Versant as a distributed data grid, maybe running in the same JVM space as the app server, with a relational database at the back end. The question is: does Versant support those models through its pricing? Particularly (2)? Thanks for your answer. Best regards, Dominique http://www.jroller.com/dmdevito
  27. I see at least 2 use cases where an object database, like Versant, could be accepted (that is, more mainstreamly accepted)
    Thanks for the "mainstream" caveat ;-) Few realize that in some way, we are even running these conversations, as all the traffic flows through switches from Alcatel-Lucent-Nortel, Ericsson, NEC, Siemens, Samsung, etc and those national network management suites are all built on Versant. This not to mention, people are using us every time they book an airline ticket or likely when doing an options trade in the stock market.
    1)use Versant as the primary production database, and synchronize it (a one-way relationship) with a relational database used for BI.
    Many of our customers are using us in this way. We can export XML and RDBs can consume it. We also support JCA, and some are using us in transactions with RDBs. I think it is correct to say that people use Versant to build the systems which define their business, not necessarily the applications which run their business. First, you need a reservations system which defines your company as a GDS provider ( i.e. Sabre on Versant ), then you need applications to manage all your employees and deal with human resource issues ( Oracle, Sybase, SQLServer, etc ).
    2) use Versant as a distributed data grid, maybe running into the same JVM space than the app server, and with a relational database at the back-end.
    We have a container plug-in for J2EE application servers. We don't provide any magic to get your objects into an RDB. You either need to enlist multiple resources ( ODB and RDB ) and manage yourself what goes to which store, or you do it asynchronously via batch processing per the above.
    The question is: is Versant supporting those models through pricing ? Particularly (2)
    I am curious how you would see this priced? One thing our customers tend to benefit from is that we don't charge for the client/cache. I've often seen Versant used in scenarios where there were 100 CPUs in the app server tier and only 8 CPUs on the database server. Well, with Versant you only pay for those 8 CPUs; we don't charge for the benefit you receive by leveraging the caching. Cheers, -Robert Use the right tool for the job! Versant Corp
  28. What's new

    Congrats to Versant for the new version! BTW: Robert, can you tell us what's new in this release? Eric Samson, Convertigo Mashup your business.
  29. Re: What's new

    Maybe Robert can add more details, but in a nutshell:
    - Improved multi-core scalability
    - Optimizations in internal memory management and caching implementations
    - Improved database administration tools (e.g., monitoring, dbcheck, dbreorg)
    - Enhancements to JDO, including upgrading the Versant JDO SDK compatibility to the latest JDO 2.0 standard
    - .NET binding with LINQ support
    - FTS for .NET and JDO based applications
    - "Black Box" recorder and analysis
  30. Re: What's new

    Enhancements to JDO including upgrading the Versant JDO SDK compatibility to the latest JDO 2.0 standard.
    Hola German! Is that JDO 2.3? Or an earlier JDO 2.x release? Saludos
  31. Re: What's new

    Hi Andy,
    is that JDO2.3? or an earlier JDO2.x release
    It is the 2.1 release - plus vendor extensions. Cheers, -Robert Use the right tool for the job! Versant Corp
  32. I still have my Versant 2.0 T-Shirt (circa 1995) hanging in my closet. Do I get a discount on V8 if I trade in my "classic" T-Shirt? :-) By the way, the graphic on the shirt depicts an image in a car's side mirror, with the slogan "Objects are Closer than they Appear." Sadly, the objects never caught up. Mark
  33. I still have my Versant 2.0 T-Shirt (circa 1995) hanging in my closet. Do I get a discount on V8 if I trade in my "classic" T-Shirt? :-)
    No need to trade it in, keeping it intact that long deserves a free copy! Seriously .... rgreene(AT)versant.com "Objects are Closer than they Appear." But ... they are. The ODB circa 1995 was just too early. People were interested in putting browsers on existing content. There was no focus on server-side business issues, complex models and data growth challenges. It was even hard to find a good C++ programmer, and Java was in its infancy. Today, when I talk to developers it is clear that they by and large know objects and OO programming, and that was just not true in the mid '90s. Though undoubtedly many others helped, I think Sun and Java are to thank for bringing the masses to the party. Cheers, -Robert Use the right tool for the job! Versant Corp
  34. Different Behavior

    Hi, When I first tried an OODBMS (that was DB4O at the time), I was a bit confused about its behavior. E.g. most objects do actually have a "most important field" concept (PK equivalent). E.g. every app has user objects. Now suppose the "key field" of such a User object is its login.

    Person first = new Person("login1");
    first.persist();
    ...
    Person second = new Person("login1");
    second.setLastName("Second");
    second.persist();
    ...

    I would then expect the ODB to UPDATE the login1 person. Instead, an ODB will create a 2nd person with the very same login. This is counterintuitive (well, for me ;-). So, is there a way to have this work intuitively? E.g. by config, mark it as a 'PK' field so that updates will happen on such a 2nd persist. The alternative resulted in quite ugly tree traversals to find out what needed to be updated and what not (depending on the fetch depth). -Wolf
  35. Re: Different Behavior

    Hi Wolf,
    most objects do actually have a "most important field" concept (PK equivalent).
    The ODB keeps the concept of PK orthogonal to the value of the data in an object. So, a unique id concept is there and you can get at this id if you like. It is the same concept as Datastore/Database identity in ORM tools.
    E.g. every app has user objects. Now suppose the "Key field"of such User Object is it's login.

    Person first = new Person("login1"); first.persist();
    ...
    Person second = new Person("login1"); second.setLastName("Second"); second.persist();
    ...
    I would then expect the ODB to UPDATE the login1 person. Instead, an ODB will create a 2nd person with the very same login.
    This is counterintuitive.
    I guess it's just getting used to the idea that identity is independent of the data values. Even data modeling books these days advocate such an approach. The issue is the "new" call. Any "new" generally results in a potentially new database object. Anyway, if you have the id ( e.g. "login1" ), then rather than "new"-ing an object you would just point to it. In essence:

    Person p = session.getObjectById("login1");
    p.setLastName("second");

    Cheers, -Robert Use the right tool for the job! Versant Corp
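    If the application only holds the login value rather than a database identifier, the same "point at the existing object instead of new-ing one" idea can be written as a lookup followed by an in-place update. A minimal sketch using the JDO binding discussed earlier in this thread (the Person class and parameter names are illustrative):

    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;
    import javax.jdo.Transaction;

    public class UpdateByLoginSketch {

        // Hypothetical persistent class from Wolf's example.
        public static class Person {
            String login;
            String lastName;
            public Person(String login) { this.login = login; }
            public void setLastName(String lastName) { this.lastName = lastName; }
        }

        // Look up the existing person by its "key field" and update it in place,
        // instead of constructing a second object with the same login.
        public static void rename(PersistenceManager pm, String login, String newLastName) {
            Transaction tx = pm.currentTransaction();
            tx.begin();
            Query q = pm.newQuery(Person.class, "login == :login");
            q.setUnique(true);                                       // expect at most one match
            Person p = (Person) q.execute(login);
            if (p == null) {
                p = (Person) pm.makePersistent(new Person(login));   // first time: create it
            }
            p.setLastName(newLastName);                              // picked up at commit
            tx.commit();
        }
    }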
  36. Re: Different Behavior

    I second Robert's reply. Just wanted to add that you can prevent the situation depicted in your example by using a unique field constraint in db4o. See: http://developer.db4o.com/Documentation/Reference/db4o-7.4/java/reference/html/reference/implementation_strategies/unique_constraints.html Best!
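    For reference, setting up such a constraint in db4o looks roughly like the following; this is from memory of the 7.x configuration API, so treat the exact calls as an assumption and check the linked reference page:

    import com.db4o.Db4o;
    import com.db4o.ObjectContainer;
    import com.db4o.config.Configuration;
    import com.db4o.constraints.UniqueFieldValueConstraint;

    public class UniqueLoginExample {

        public static class Person {
            String login;
            public Person(String login) { this.login = login; }
        }

        public static ObjectContainer open(String file) {
            Configuration config = Db4o.newConfiguration();
            // The constrained field must be indexed; the constraint then makes the
            // commit of a second Person with the same login fail.
            config.objectClass(Person.class).objectField("login").indexed(true);
            config.add(new UniqueFieldValueConstraint(Person.class, "login"));
            return Db4o.openFile(config, file);
        }
    }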
  37. You announce v 8.0, but don't say ANYTHING about what's new in this release. Impressively useless post.
    Hi! From the press release:
    * Multiple enhancements for high-performance computing on large multi-CPU/multi-core servers
    * New Transparent Persistence support for the Microsoft .NET platform, offering a JDO/JPA-like API for C# and VB, including support for LINQ
    * Updated Java/Java Data Objects (JDO) 2.x
    cheers Christian