Thought Inc., maker of the CocoBase object-to-relational mapper for EJB application servers, has announced significant advances in its transparent object persistence features. The idea is that you can work with plain Java classes (not entity beans) and have them persisted to a database without any extra code (similar to the new JDO spec).
THOUGHT Inc., the market and technological leader in Object to Relational Mapping OPTIMIZED for EJB, today announced exciting advances in Transparent Object Persistence features in CocoBase Enterprise O/R for use with EJB and Java Classes. Persisting complex data object graphs in both a local and distributed environment is greatly simplified to a few lines of code. This is accomplished dynamically, thus not polluting the object graph and allowing for optimal continued reuse. Ward Mullins, CTO of THOUGHT Inc., summed up the company's philosophy at a news conference today in this way: "To the Object Model do no harm."
It is important to compare and contrast this development with existing systems such as ODMG and JDO. These specifications say that the APIs are open; however, in order to achieve a practical implementation, the result becomes proprietary and vendor specific. This means that the class must either be rewritten by hand or rewritten by some tool to be compatible with their systems. The increased complexity and developer time to create and maintain such a large code base can quickly become unreasonable.
CocoBase's architecture by contrast doesn't require CocoBase code to be inserted into the objects or the object graph. Therefore reprocessing of source code or byte code in order to persist the objects does not occur. Instead, the CocoBase product can persist an entire graph of objects created elsewhere with no changes to either the object source or binary code.
Both ODMG & JDO require changes to the source or binary code of the objects. These approaches are both unnecessary and often increase the amount of error-prone code to an unacceptable level. Many consider the designs of ODMG and JDO to be counter to Java's open access - cross platform - component reuse design intent. While CocoBase's architecture makes Java classes vendor neutral, the other approaches deeply tie the source or binary code to only one vendor. Their system creates a Java class hierarchy that, once processed and integrated by one vendor, becomes completely incompatible with another.
This latest generation of CocoBase's technology can identify 'copies' of objects and parts of object graphs, and manage them in both a local and distributed environment. Entire graphs of objects can be copied to a remote machine, manipulated, and then returned with the copy synchronized against the original. Using this facility, CocoBase can manage virtually any serializable object graph.
About CocoBase Enterprise O/R
CocoBase Enterprise O/R provides EJB developers templates for Container and Bean Managed Persistence that are optimized for scaling and performance, with full support for EJB 1.0, 1.1 and the anticipated 2.0. The generated CMP, BMP and JSP code, like most of CocoBase Enterprise O/R, can be extended or changed based on the customer's unique requirements without having to involve costly consulting. This fine-grained Object to Relational mapping and dynamic code generation provides the developer consistent, high-end database access across any standard EJB Server, with the ability to manage complex relationships between Entity beans and Non-Entity beans, and access to any standard Relational database.
About Thought Inc.
Established in 1993, THOUGHT Inc. is the market leader in Java-based object-oriented Mapping Middleware technology. THOUGHT Inc. has been shipping CocoBase, its patented flagship product and the industry's most advanced framework for accessing legacy data in a distributed application, since March of 1997. CocoBase is a standards-based product relying on such technologies as RMI, CORBA, JDBC, EJB, etc. THOUGHT Inc. makes its home in San Francisco, supporting customers worldwide in the United States, Europe, Japan, Asia and Australia. CocoBase and THOUGHT Inc. are registered trademarks of THOUGHT Inc. CocoBase technology is based on US patent #5857197. More information on THOUGHT Inc. can be found online at www.thoughtinc.com.
In their press release, THOUGHT has made some comments on JDO that just aren't true.
THOUGHT sez: "It is important to compare and contrast this development with existing systems such as ODMG and JDO. These specifications say that the APIs are open, however, in order to achieve a practical implementation the result becomes proprietary and vendor specific. This means that either the class must be rewritten by hand or rewritten by some tool to be compatible with their systems. The increased complexity and developer time to create and maintain such a large code base can quickly become unreasonable...While CocoBase's architecture makes java classes vendor neutral, the other approaches deeply tie the source or binary code to only one vendor. Their system creates a java class hierarchy that once processed and integrated by one vendor becomes completely incompatible with another."
RESPONSE: This betrays ignorance of the JDO specification. JDO requires that the persistence-capable classes be *binary compatible* with all persistence-capable classes from all JDO vendors, including the freely available binary distribution of the JDO Reference Implementation. Part of the test suite for JDO compliance is verifying that the JDO implementation operates correctly with the Reference Enhancement of application classes.
The reason JDO inserts special code into the persistence-capable classes is two-fold:
1. Changes to persistent fields are automatically detected and reported to the PersistenceManager responsible for the instance. This allows for extremely efficient change management. The alternative of "diffing" two object graphs is very inefficient.
2. Navigation off the edge of an object graph transparently can only be done by detecting references to uninstantiated parts of the graph. This is not an issue if your object graphs are small, and can be bounded easily; but if your object graph is highly connected then the closure of the graph can easily be most of your database. This will be the case if you naturally map from a relational database to a Java object model. Most of the tables are related (hence the term "relational") directly or indirectly to most of the other tables in a schema. Constructing "islands" of object graphs is an anomaly.
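For readers unfamiliar with what enhancement actually adds, here is a rough hand-written sketch of the effect of point 1: every write to a persistent field is routed through a mediator so the manager learns about changes as they happen, with no diffing. The `ChangeTracker` type and class names here are illustrative inventions, not the real JDO StateManager API.

```java
import java.util.ArrayList;
import java.util.List;

// Hand-written equivalent of what a JDO-style enhancer effectively
// generates: field writes are rewritten to also notify a tracker.
class TrackedEmployee {
    interface ChangeTracker {
        void fieldChanged(Object source, String fieldName);
    }

    static class RecordingTracker implements ChangeTracker {
        final List<String> dirtyFields = new ArrayList<>();
        public void fieldChanged(Object source, String fieldName) {
            dirtyFields.add(fieldName);
        }
    }

    private String name;               // a persistent field
    private final ChangeTracker tracker;

    TrackedEmployee(String name, ChangeTracker tracker) {
        this.name = name;
        this.tracker = tracker;
    }

    // The enhancer turns plain field writes into mediated writes like this.
    void setName(String name) {
        this.name = name;
        tracker.fieldChanged(this, "name");  // change reported immediately
    }

    String getName() { return name; }
}
```

The point of the sketch: at commit time the manager already holds the exact set of dirty fields, with no graph comparison needed.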
THOUGHT sez: "Instead, the CocoBase product can persist an entire graph of objects created elsewhere with no changes to either the object source or binary code."
RESPONSE: This exposes the developer to the same issue as with Serialization as a persistence mechanism. You get an object graph of instances to ship around, and deal with the entire graph for purposes of change detection.
If you assume that the object graphs are Serializable, you are artificially constraining the problem to those Java classes that either themselves implement the serializable interface methods, or can use default serialization.
But JDO allows transparently storing instances of classes that do not support serialization, and for classes that are serializable further allows identifying fields in the object model as being transient for serialization purposes but persistent for storage in the database. This solves the issue of serializing the entire database, if you have a natural Object Model.
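A small runnable illustration of the serialization side of this distinction: a field marked `transient` is skipped by Java serialization, yet a JDO implementation may still persist it, because persistence is declared in metadata rather than by the keyword. The class and field names are made up for the example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demonstrates that `transient` controls serialization only; whether a
// field maps to a database column is a separate (metadata) decision.
class Account implements Serializable {
    private static final long serialVersionUID = 1L;
    String owner;
    transient String cachedBalance;  // dropped by serialization

    Account(String owner, String cachedBalance) {
        this.owner = owner;
        this.cachedBalance = cachedBalance;
    }

    // Serialize and deserialize in memory, returning the copy.
    static Account roundTrip(Account a) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(a);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Account) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

After a round trip the transient field comes back as `null`, which is exactly the behavior the response says JDO lets you decouple from database storage.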
THOUGHT sez: "The increased complexity and developer time to create and maintain such a large code base can quickly become unreasonable."
RESPONSE: The complexity of enhancing a Java .class file is on the order of compiling the file. There's no extra code for the developer to manage; it's just an extra step during either development or deployment (depending on the tool). And if you enhance using one tool, you are done. The classes can be used with any JDO vendor's implementation.
THOUGHT sez: "Many consider the designs of ... JDO to be counter to Java's open access - cross platform - component reuse design intent."
RESPONSE: Besides yourselves, can you be more specific? Highly-placed sources within the industry, speaking on deep background, perhaps?
It's unfortunate that in order to promote its Cocobase product, THOUGHT felt it necessary to make inaccurate statements about competitive technology.
You go, Craig.
I've been using the ODMG Java binding for a number of years, and, once I overcame the admittedly uncomfortable notion of my source code executing more than what I wrote, I sat back and enjoyed the productivity. I've been aching for a persistence story that integrates with EJB transactions and security, and JDO (along with JCA) is it.
I am seeing more and more that folks are simply abandoning entity beans in favor of direct database manipulation via session beans; my personal wish a couple of years ago was to manipulate fine-grained persistence-capable classes in session beans, where my session beans represented services.
The only system I've seen that impressed me because of its persistence story was GemStone/J, but now that Brokat has killed it, it's really not an option until someone else (or Brokat) restores the industry's faith that they will continue the product. GemStone/J's JVM had the persistence code in it, so there was no class enhancement; the tradeoff was that you had to use their JVM.
Anyway, I would strongly, strongly encourage proponents of entity beans to check out the ODMG Java binding (a great implementation is Poet's, www.poet.com, with a great trial product) as well as JDO, comparing specifically the amount of code that the developer is forced to look at. With enhanced classes, you leave the persistence code to the experts of persistence, and the business object code to the experts of business objects.
How much fun is it to write an O-R mapping layer yourself, anyway? For me, not very much. Furthermore, the cost your company will incur in developer & tester time to write one O-R mapping layer for one object model will probably exceed what it costs to buy your persistence capability that will work for all object models.
Sometimes I wonder whether entity beans were thrown in to one-up the competition (Microsoft Transaction Server). After all, session beans have a high degree of correlation with MTS components (often stateless, short in duration, accessing managed resources, and propagating transaction and security contexts).
Anyway, that's my $0.02.
I think entity beans were added to provide a component-oriented representation of business objects .. something that MTS doesn't have (MS has never been very object-centric). Unfortunately, Sun really underestimated (imho) the problems with integrating with a datastore, hence the problems with both BMP and earlier versions of CMP.
Regarding GemStone/J, I agree it was really one of the best solutions for transparent persistence -- and from what I've heard they're going to spin the cache out into a separate product (hopefully)... I'd really like to see the smalltalk product preserved too (or open sourced)...
But, more on topic, transparent O/R mapping is something TopLink has been doing for quite some time... it requires no post-processing, just a configuration file. It scans changed fields and compares the changes with a snapshot to determine the SQL statement.
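A minimal sketch of that snapshot-compare idea: keep the field values as they were read, and at commit time compare the live values against the snapshot to build a minimal UPDATE. Table and column names are invented for illustration, and a real mapper would of course also bind the parameter values.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Compare a snapshot of an object's column values with its current
// values and emit an UPDATE covering only the changed columns.
class SnapshotDiffer {
    static String updateSql(String table, String key,
                            Map<String, Object> snapshot,
                            Map<String, Object> current) {
        List<String> sets = new ArrayList<>();
        for (Map.Entry<String, Object> e : current.entrySet()) {
            if (!Objects.equals(snapshot.get(e.getKey()), e.getValue())) {
                sets.add(e.getKey() + " = ?");   // only changed columns
            }
        }
        if (sets.isEmpty()) return null;          // nothing to write
        return "UPDATE " + table + " SET " + String.join(", ", sets)
             + " WHERE " + key + " = ?";
    }
}
```

The diff runs entirely in memory; no network traffic happens until the single UPDATE is issued, which is the crux of the diff-vs-notify debate below.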
On the bright side, this is good for CocoBase, as the main problem I've had with it in the past is the gobs of generated code it used to require.
Thanks for the response.
We like the GemStone/J solution quite a bit because it is
non-intrusive into the bytecode. We told the founder of
ODMG when he was on his JavaBlend kick that bytecode
invasiveness was a really lame idea for Java. We still
believe it to be true, and just to demonstrate it at the
time we actually built a Transaction object that mimicked
the ODMG APIs, but wasn't restricted to its limitations.
The ironic part is that our solution is incredibly similar
to the JDO/ODMG Transaction API behaviors, it just does it
without any object model intrusion.
We're aware of this from TopLink. As far as we know, we
were the first vendor to do this model. At the
time we introduced it, TopLink still required proxy objects
to navigate, and we built an external system (about 3 years
ago) to navigate. What we've done recently is to take that
to the next level by allowing simultaneous models to
dynamically be used with the same application and thereby
'morphing' behavior in terms of lazy/non-lazy loading, etc.
And to do it for very complex object models that can either
be local or distributed. So we can do things like transfer
the entire object graph (with cycle detection) to a client
from an Entity or Session bean, work on the local copies
and transfer the copies back to the server in a single
network call if we wish to commit the changes. This offers
significantly better performance than the traditional approach.
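The single-network-call graph transfer described above depends on cycle detection. A minimal sketch of identity-based cycle-safe traversal (my own illustration, not CocoBase's actual code) might look like this, with `Node` standing in for arbitrary application objects:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Deque;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

// Walk an object graph, visiting each object exactly once even when
// the graph contains cycles (parent -> child -> parent) or shared refs.
class GraphWalker {
    static class Node {
        final String name;
        final List<Node> refs = new ArrayList<>();
        Node(String name) { this.name = name; }
    }

    static List<String> visitAll(Node root) {
        // Identity-based 'seen' set: equality of content doesn't matter,
        // only whether this exact instance was already processed.
        Set<Node> seen = Collections.newSetFromMap(new IdentityHashMap<>());
        List<String> order = new ArrayList<>();
        Deque<Node> stack = new ArrayDeque<>();
        stack.push(root);
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            if (!seen.add(n)) continue;   // cycle or shared reference
            order.add(n.name);
            for (Node ref : n.refs) stack.push(ref);
        }
        return order;
    }
}
```

Any serializer or copier that ships whole graphs needs bookkeeping of roughly this shape so that a cyclic graph terminates and each instance is transferred once.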
You mention the gobs of our generated code; you may have missed
our 'generic' code generation targets and CocoNavigate
system. This has evolved into what we call the
Navigator, which is an 'instance'-based navigation system
that's well suited to working inside J2EE containers as
well as locally in applets/applications...
We can also generate these models out of an existing UML/XMI
model in seconds and immediately persist the java classes
without any intrusion into the model or bytecodes...
This also means that customers can do things like build
generic 'session' beans that can manage any number of new
java objects on the fly without actually having to write
specific management code. It GREATLY simplifies network
programming with complex object models...
Thanks for your posting and if you haven't taken a look at
CocoBase in a while you might want to give it a spin!
"This betrays ignorance of the JDO specification. JDO requires that the persistence-capable classes be *binary
compatible* with all persistence-capable classes from all
JDO vendors, including the freely available binary distribution of the JDO Reference Implementation. Part of
the test suite for JDO compliance is verifying that the JDO
implementation operates correctly with the Reference
Enhancement of application classes. "
THOUGHT Response: Actually, the minute the Java classes
are processed and used by a given vendor, no other vendor
can use those instances. Also the classes must be either
bytecode manipulated or inherit from a class hierarchy to
persist them with JDO. With CocoBase no such manipulation
or 'enhancement' is necessary. Part of what we were
referring to is the fact that CocoBase can interoperate on
pre-existing pure object model generated unmanipulated java
classes that any number of other vendors can simultaneously
use. So for example we work with the Javlin product which
does ODMG byte code manipulation, and we can in fact track
and persist instances that are being managed simultaneously
by Javlin. Similarly we can co-exist with 'copies' of
objects, not just raw instances. This 'non-intrusive' model
is more appropriate and less intrusive than your
specification, which isn't necessary for Java or O/R mapping,
although it probably is useful for C++ or object databases...
"The reason JDO inserts special code into the persistence-capable classes is two-fold:
1. Changes to persistent fields are automatically detected and reported to the PersistenceManager responsible for the
instance. This allows for extremely efficient change
management. The alternative of "diffing" two object graphs is very inefficient. "
This is a very misguided and inaccurate assumption from many
perspectives. While notification might 'academically' seem
more efficient at first, in fact with a
network the overhead of dinking the server for every little
change is ridiculously high, and it means that there
will be a round-trip network connection for every
single attribute changed or changed back, no matter whether
those changes are committed or not...
The assumption that 'diffing' objects is very slow is also
wrong. While some implementations might be relatively slow,
ours isn't. It's often much faster to diff a graph than to
even make a single network call, so these assumptions
aren't very real or very accurate. If we had been consulted
on this issue, then we could have cleared this up before
it became the prevailing wind around this specification.
As always there are many ways of accomplishing tasks, and
as it happens ours just happens to be very efficient.
"2. Navigation off the edge of an object graph transparently
can only be done by detecting references to uninstantiated
parts of the graph. This is not an issue if your object
graphs are small, and can be bounded easily; but if your
object graph is highly connected then the closure of
the graph can easily be most of your database. This will be
the case if you naturally map from a relational database to
a Java object model. Most of the tables are related (hence
the term "relational") directly or indirectly to most of
the other tables in a schema. Constructing "islands" of
object graphs is an anomaly. "
Wow, this is such an inaccurate and unsubstantiated statement
that I don't know where to begin. First of all, just because
there's a relationship doesn't mean it has to be loaded.
The lazy load model is quite common from O/R vendors and
totally destroys any concerns about this. Perhaps issues
such as this haunt ODBMS vendors, but not O/R layers such
as CocoBase which lazy load quite nicely.
Developers do not need to load an entire complex graph
of objects from a relational database to build any
application in java. It's quite appropriate and normal
for them to lazy load only those portions of a graph
that they may need at any point in time. Most developers
who've built a distributed or even a local app understand this.
Not only is this an irrelevant statement, but if this is
the justification for model intrusion it should be revisited
and the thinking should be radically adjusted...
This is not a real issue in the real world...
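The lazy-load pattern this post leans on can be sketched in a few lines: a relationship field holds a lightweight holder instead of the related object, and the real load (normally a database query) runs only on first access. This is a generic illustration, not any vendor's implementation; the `loadCount` field exists only to make the behavior observable.

```java
import java.util.function.Supplier;

// A holder that defers loading its value until the first get(),
// then caches it so the expensive load happens at most once.
class Lazy<T> {
    private final Supplier<T> loader;
    private T value;
    private boolean loaded;
    int loadCount;                    // for observing the behavior

    Lazy(Supplier<T> loader) { this.loader = loader; }

    T get() {
        if (!loaded) {
            value = loader.get();     // e.g. SELECT ... WHERE fk = ?
            loaded = true;
            loadCount++;
        }
        return value;
    }
}
```

With holders like this at relationship boundaries, traversing a mapped object never forces the closure of the whole database into memory, which is the substance of the disagreement above.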
"This exposes the developer to the same issue as with Serialization as a persistence mechanism. You get an
object graph of instances to ship around, and deal with the
entire graph for purposes of change detection. "
That's wrong. Maybe that's how some developers less
familiar with O/R might have initially implemented
this, but it's not how CocoBase works. It sounds like the
folks who built JDO are more familiar with ODBMSs and
aren't very familiar with Java O/R tools, based on this
response. CocoBase can automatically ship around any instance from an
object graph or the whole graph. And as for how efficient
the whole graph is in being processed, it's much more
efficient than you think and it's much more efficient than
the ridiculous JDO model which would bog down network
traffic with individual network calls for each and every
operation. CocoBase is designed for Enterprise systems,
but the JDO specification would break down in a distributed
environment very quickly based on my review of it. But if
we had been asked, we would have stated it months ago...
This isn't the first time I've heard this argument, and
perhaps it holds water for other vendors, but not for CocoBase.
"If you assume that the object graphs are Serializable,
you are artificially constraining the problem to those Java
classes that either themselves implement the serializable
interface methods, or can use default serialization. "
Oh please. First of all, classes used in a distributed
environment are typically already serializable. Any
junior Java programmer knows this...
And default serialization doesn't do O/R mapping;
otherwise developers wouldn't need O/R mapping tools.
It's becoming increasingly clear to me that while the
JDO specification may have strong ODBMS underpinnings, it
still needs significant work both politically and
technologically before it will actually be useful for O/R.
The belief stated in the response that serialization is
somehow O/R mapping is kind of disturbing in its lack of
understanding of this problem space and explains a lot...
"But JDO allows transparently storing instances of classes
that do not support serialization, and for classes that are
serializable further allows identifying fields in the
object model as being transient for serialization
purposes but persistent for storage in the database.
This solves the issue of serializing the entire database,
if you have a natural Object Model. "
Oh come on, that's the most artificial argument that I've
ever heard. To make a transient field persistent is a
strange idea. They contradict each other. If it's transient
it should NEVER be persistent. As for serializing the
entire database, perhaps that's how ODBMSs would write the
application, but it's inconsistent with both how the real
world works and how CocoBase works. This response seems
desperate to describe both artificial and irrelevant issues
that don't even relate to REAL distributed or relational/
object programming... The reasoning in here isn't even
relevant to real problems...
(continue on part2...)
(part 2 continue... read part1 first)
"The complexity of enhancing a Java .class file is on the
order of compiling the file. There's no extra code for the
developer to manage; it's just an extra step during either
development or deployment (depending on the tool). And if
you enhance using one tool, you are done. The classes can
be used with any JDO vendor's implementation. "
Actually, this means there'll be two versions of the Java
classes. This is a maintenance issue, and it's intrusive
and limits JDO's usefulness. If a developer wishes to use
a mixed-mode app where EJBs are to be used with something
like a 'state' object and the client app or session bean
also needs to use its persistence to another source, the
transactional and context conflicts that could occur
are mind-boggling. The JDO 'transparency' doesn't give
enough control to the developer and therefore constrains them.
That's a nice thought that customers will be able to
use any vendor's implementation. Is that kind of like the
ability to use an EJB bean with any vendor that the EJB
spec also requires? There's the specification and then
there's the reality. There are a lot of things that JDO
doesn't yet address, so vendors will have to extend their
products to compensate for this. So while they may work
to pass your compliance tests, good luck to customers who
actually have to port a real application!
With JDO in its current state, based on what I've seen,
there's no way that a vendor could implement 'real' O/R
mapping without making extensions... Heck even the open
source JDO vendors have backed away from using the current JDO spec and only ODBMS vendors actually support it...
The statement that this is 'no real effort' is especially
ridiculous since CocoBase, a real product available today,
can already do that WITHOUT bytecode manipulation. In
other words, we have a spec that promises for the future
what vendors already ship today with less intrusion. We
have also had customers who looked at the initial JDO
implementations and abandoned it because they couldn't even
debug their own apps after bytecode instrumentation! That
hardly sounds more manageable or less proprietary to me...
If a processor modifies bytecode with proprietary
instrumentation that prevents proper application debugging,
then that's a really bad design in my opinion.
"Besides yourselves, can you be more specific?
Highly-placed sources within the industry, speaking on
deep background, perhaps? "
How about customers? You know - people who actually build
applications - and not just standards... How many customers
have implemented enterprise applications on JDO? How many
of the architects of JDO actually built O/R mapping tools
that are used by the industry? How many times did you try
to consult real O/R product vendors when constructing this
standard? How many customers did you actually ask whether
they wanted object model intrusion or not - or did you just
tell them that it was good for them and you knew best...
Since we weren't consulted, and no one in the JDO group
can be bothered to consult vendors who actually have to
deliver real products, it's hard to see how we could
possibly adopt a specification built in such a void of input.
Especially in light of the uninformed critical postings
such as this one from you.
"It's unfortunate that in order to promote its Cocobase
product, THOUGHT felt it necessary to make inaccurate
statements about competitive technology. "
This is what psychologists call 'projection'. Actually if
JDO is what you say it is, why would it be competitive
technology? Unless you're trying to revive some dead product
for JDO??? My understanding was that JDO was an open
specification? Or is this another typical 'reference
product' that will simply die from a lack of care and
feeding? If so is it a technology or a specification that
you wish us to adopt? Are you finally interested in our
feedback? And if so why haven't you worked with
us to take our feedback to the specification? You may have
grand visions of JDO, but at this point it's only a
slightly modified ODMG specification that's about 10 years
old and about 10 years out of date for Java. If you want
us to support it, it needs to advance a decade, and you
need to take our feedback seriously instead of criticizing
it. We're the vendor with the market accepted O/R tool,
not you. The reason we're successful is that we listen
to customers and provide real value.
Being combative to the very vendors that you need for your
new specification to succeed, and not worrying about what
customers actually want will cause your specification to
flop like those before it. When you stop talking and
start listening to the marketplace then maybe JDO will
change to reflect those values...
It may surprise you that we actually wanted to support JDO
despite the fact that you never involved us in the
creation of the specification - a HUGE oversight on your part in my opinion!
We actually took the time to investigate the specification
and found that it would destroy the performance of any
application that an Enterprise customer might wish to
build, and that it was neither transparent nor non-intrusive
and that it was completely devoid of any relationship to
O/R mapping... It's an Object Database specification, and
while ODBMS vendors that don't already have a query language
such as SQL might like it, it's way too much overhead and
too poorly architected for enterprise development. The
double parsing of OQL and SQL is a horrible design for
relational O/R mapping, and will seriously compromise the
performance of applications!
Your arguments for why it has to be architected this way are
completely flawed and poorly argued. We prove the contrary
and demonstrate that in fact you can develop TRUE
transparent persistence with no bytecode mangling or
class hierarchy intrusion, and that it can be fast and efficient.
Next time you wish to promote a specification in an
established marketplace you might want to consult the
companies that actually deliver products in it. We have
a much broader understanding of the marketplace than you
could possibly have, after all we are the market leader
in distributed O/R mapping for the Enterprise.
For you to so arrogantly discount our expert knowledge is
horribly unprofessional in my opinion.
We'll be happy to give you our feedback on JDO 2.0 if it
ever is made, but JDO 1.0 is currently unusable for our
customer base, and contrary to your arguments it is not
a 'competing' technology any more than EJB is, but it is
an ill conceived specification for the purposes of O/R
mapping - and as such I can't endorse using it for that
task in good conscience!
This assessment of our stance is seriously deluded and
needs a major reality check! We're responding to customer
demands as any good business does, and honestly the market
doesn't seem to care about JDO, but it does care about high
performance and non-intrusive O/R mapping - something we deliver.
This flame war should have been carried out in private and
not in such a well respected forum. Please refrain from
making these public hit pieces on a forum such as this
to promote your pet projects unless you do better research.
This doesn't reflect well on you, Java or Sun to be making
these kinds of snipes at a partner of Sun's and a long-term
friend of Java. It should be beneath you, and I'm shocked
at your desperation at making such a stab at our products.
If you want JDO to succeed, work with us. It's the only
professional thing to do... We've always been a huge
supporter of standards, and this stab is both inappropriate
and petty... We'll be happy to support JDO if it evolves
into a useful specification for O/R mapping. Until then
we leave it to the ODBMS vendors, where it is MUCH more appropriate.
<laugh/> Dude. Chill.
We just got done with an eval of CocoBase, and we decided that, *regardless* of the underlying O/R tool, the Swing GUI that came with it was *way* too beta to foist off on our less experienced developers. This was coupled with the fact that when we unjarred the product, the file system layout was *absurd*, and obviously full of crappy legacy decisions. (What's with update406.zip in the root dir? And why do you guys only buy half-way into Ant, instead of actually using it in a rational manner?)
But what really stopped us from using it was the attitude of our sales guy. He absolutely blew his stack when we brought these problems up. It was comical, being yelled at and insulted on a sales call. The only explanation we could fathom was that there was a culture problem at the company, which must have been coming from up high.
You, my friend, just proved that.
P.S. Take a peek at http://jakarta.apache.org/ojb/
I can 100% agree with your posting!
..... but there is one BIG problem:
what persistence layer (for production) to use?!?!
- EJB 2.0
- OJB from apache is not yet finished :-(
- 'home grown' mapping tool (but what about clustering, ...)
- JDO implementation (which one)?
can you give any hints?
what do you use (and why)?
how to be database independent?
Is this an advancement of your CocoNavigate system?
Absolutely it is, Sanjay. It's a new instance-based version
that's better suited to working in a fragmented and
threaded server environment.
It's also now integrated to work with UML/XMI so the process
of defining the transparent persistence just became a cake
walk! In fact if you already have it defined, it's about
a 30 second process to import your model into CocoBase and
even to generate DDL for your database if you want...
How would you two compare Castor's JDO vs. Sun's JDO and O/R tools such as Cocobase? Castor's JDO is a different implementation than Sun's JDO. Not sure how many companies out there are using it, but it has been around the block for almost two years.
CocoBase is truly non-invasive. Any JDO solution (Castor
or Sun's) is inherently invasive because of the JDO spec.
CocoBase believes transparent persistence means that the
object model is unaware of its own persistence (hence the
term transparent), but JDO requires that the Object be
modified either by coding or bytecode mutilation to meet
this requirement. It's unnecessary, invasive and ill conceived.
Also CocoBase is a mature product that focuses on
portability, performance, integrations and all of
the other things that customers need in a product. Both
Sun's JDO and Castor are basically research products at
this point. One customer recently reported to us that
their code with Castor ran 100 times slower than their
hand-coded JDBC code. Compare that with CocoBase, which
if properly configured has only about a 1 to 2% overhead
compared to hand coded JDBC...
That has everything to do with the fact that the product
was crafted based on the requirements, architecture and
features and limitations of the Java language and JDBC.
The product reflects good engineering practices and a
desire to have an infrastructure that runs as well as or better
than anything the developer would hand code. The CocoBase
Enterprise O/R product for Java is almost 5 years old, and
it's difficult to reproduce that mature of a technology in
a basement in a few months - even from the best engineers...
Just my $.02
I just found this article with the help of the new JDOCentral site. If you are still reading along:
If Cocobase uses serialisation, how does your engine
handle modifications to class files?
I like your point on non-invasive persistence. We
are going the same way but our engine uses reflection
to store objects. Accordingly you may modify your
classes as you wish and our engine will always hold a
superset of all stored versions of classes. I don't see
how this could be possible with serialisation in a
living and moving developing application.
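A minimal sketch of the reflection-based idea the poster describes: instead of relying on serialization, read an object's fields reflectively into a name/value map, so the stored form is a flat superset that tolerates class evolution. The class names are invented for illustration.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Capture an object's declared fields as a name -> value map via
// reflection, independent of the class implementing Serializable.
class ReflectiveStore {
    // A small sample class to snapshot.
    static class Point { int x = 3; int y = 4; }

    static Map<String, Object> snapshot(Object o) {
        Map<String, Object> values = new LinkedHashMap<>();
        for (Field f : o.getClass().getDeclaredFields()) {
            f.setAccessible(true);    // read private fields too
            try {
                values.put(f.getName(), f.get(o));
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return values;
    }
}
```

Because the stored form is keyed by field name rather than by a serialized byte layout, adding or removing a field in the class simply adds or orphans a key, instead of breaking deserialization of old data.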
Having read the complete thread, I do think it would
be more appropriate to mail with Craig in a more polite
tone. I do think it's great that Sun cares about
object-relational mapping. All the companies that want
to drag the industry away from using pure SQL
db4o - database for objects
This is a general question with J2EE that luckily we don't
have to deal with ourselves, as it's an issue for the app
server itself. Since we're not supplying the serialization
and instead the app server and JVMs are doing that for us,
we assume (reasonably so) that this will be correct.
You're correct that app servers & classpaths can get out of
sync, but that's true anytime j2ee components are used,
whether it's an EJB's remote interfaces, or an object
persisted by the session bean.
Our new release due out later this week further enforces
the concept of a 'dynamic facade', where a developer can
move from a 2-tier to a 3-tier Session based application
at runtime! No need for an application recompile - pretty cool!