Interview with Marc Fleury WRT problems with EJB 2.0

Discussions

EJB design: Interview with Marc Fleury WRT problems with EJB 2.0

  1. I wanted to open a discussion regarding Marc's comments (see the interview in the "Hard Core Tech Talks" section) on what is problematic in the EJB specification. He pointed out two things, both of which have to do only with entity beans:
    * the "persistence specification in [EJB] 2.0", and
    * the "cache policies in EJB, [...] the option A, option B, option C".
    I found that very interesting, as I have considered entity beans problematic ever since EJB 1.0. There were three main reasons I criticized entity beans in versions 1.0 and 1.1:
    * they were only accessible through remote semantics,
    * they were very specific to relational databases, and
    * they introduced substantial code bloat.
    Now, in EJB 2.0, my first criticism has at least been addressed by the introduction of local interfaces. However, the other two still stand.

    If one were to use EJBs in a project, I would endorse a design that emphasized message-driven and (preferably stateless) session beans that took coarse-grained request objects, manipulated fine-grained persistent business objects based on the content of the request, and returned results, if required, using coarse-grained response objects.
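
    As a rough sketch of what I mean, a stateless session bean acting as a coarse-grained facade might look something like the following. The class names (OrderFacadeBean, PlaceOrderRequest, PlaceOrderResponse, Order) are made up, and the persistence mechanism is deliberately left open.

    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    // Hypothetical coarse-grained facade: one call carries everything the use
    // case needs, and the bean works on fine-grained persistent objects internally.
    public class OrderFacadeBean implements SessionBean {

        private SessionContext ctx;

        // Business method exposed through the (local or remote) component interface.
        public PlaceOrderResponse placeOrder(PlaceOrderRequest request) {
            // Manipulate fine-grained persistent business objects here, driven
            // entirely by the contents of the coarse-grained request object.
            Order order = new Order(request.getCustomerId(), request.getLineItems());
            order.priceAndValidate();
            // ... persist the order with whatever persistence mechanism you prefer ...
            return new PlaceOrderResponse(order.getOrderNumber(), order.getTotal());
        }

        // Standard stateless session bean lifecycle callbacks.
        public void ejbCreate() {}
        public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }
        public void ejbRemove() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
    }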

    The only specification I find viable for doing this is JSR 12, JDO. Persistence-capable classes are intended to be used locally, and JDO introduces some additional code into your persistent object model, but not a substantial amount. Granted, it tightly couples your objects to JDO, but it does not bind you to relational or object-oriented databases, and it is possible to switch JDO vendors with little impact, if any, on your model. There are several O-R mapping implementations being readied for release, several OODBMS implementations, and even one legacy-system implementation (SAP); see http://access1.sun.com/jdo for pointers to implementations.
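
    To illustrate the vendor independence, bootstrapping JDO looks roughly like the following; the property values are placeholders and the factory class name is vendor-specific, so switching vendors means editing properties, not the model.

    import java.util.Properties;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;
    import javax.jdo.PersistenceManagerFactory;

    // Minimal JDO bootstrap: the implementation is chosen entirely by properties,
    // typically loaded from a file rather than hard-coded as they are here.
    public class JdoBootstrap {

        public static PersistenceManagerFactory createFactory() {
            Properties props = new Properties();
            props.put("javax.jdo.PersistenceManagerFactoryClass",
                      "com.example.jdo.VendorPersistenceManagerFactory"); // vendor-specific
            props.put("javax.jdo.option.ConnectionURL", "jdbc:somedb://host/mydb");
            props.put("javax.jdo.option.ConnectionUserName", "user");
            props.put("javax.jdo.option.ConnectionPassword", "password");
            return JDOHelper.getPersistenceManagerFactory(props);
        }

        public static void main(String[] args) {
            PersistenceManager pm = createFactory().getPersistenceManager();
            // ... create, query, and delete persistent objects through pm ...
            pm.close();
        }
    }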

    I realize that there are certain areas where entity beans might provide a good solution today, one of which might be caching commonly looked-up information, such as long descriptions keyed by ID.

    A criticism I hear about JDO is that it requires post-compilation bytecode enhancement of your persistence-capable objects. However, the notion of aspect-oriented programming seems to be gaining a lot of steam, and a JDO implementation's use of bytecode enhancement can be thought of as the introduction of a persistence aspect. Let's face it, the code to implement persistence has to live somewhere -- there's no magic. It is essentially boilerplate code that can be generated automatically and acknowledged as such, but it need not be overly visible to either the persistent object model's authors or the authors of the message-driven or session beans that use it.

    Other than finding the point of entry into the graph of objects to be manipulated by the MDB/SLSB/SFSB, very little is required of the bean author to achieve persistence; simply using the model's objects and their methods is sufficient to persist the changes made during the transaction.
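
    Concretely, the bean author's side ends up looking something like the sketch below; Account and its credit() method are invented for the example. Everything after the query is plain Java, and the changes are written back at commit.

    import java.util.Collection;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;
    import javax.jdo.Transaction;

    // Rough sketch of the bean author's work: find the entry point into the
    // object graph, call ordinary business methods, and commit.
    public class CreditService {

        public void creditAccount(PersistenceManager pm, String accountNumber, double amount) {
            Transaction tx = pm.currentTransaction();
            tx.begin();
            try {
                // Entry point: a JDOQL query over the Account extent.
                Query query = pm.newQuery(Account.class, "accountNumber == number");
                query.declareParameters("String number");
                Collection results = (Collection) query.execute(accountNumber);
                Account account = (Account) results.iterator().next();

                // From here on it is plain Java: navigating and mutating the model
                // is enough, and the changes are written back at commit.
                account.credit(amount);

                tx.commit();
            } finally {
                if (tx.isActive()) {
                    tx.rollback();
                }
            }
        }
    }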

    I can't possibly give a full treatment of JDO in this discussion, so take the time to look into it deeply, as a surface reading of the specification is not sufficient to recognize the benefits that it brings with it.

    Marc Fleury, if you're listening, please comment on JDO.

    Floyd, Rickard, or anyone else from TheServerSide.com: I would really like to see an interview with Craig Russell, the JDO spec lead, as soon as possible, given the impending finalization of the spec.

    Thanks,
    Matthew
  2. Matthew said:
    >Granted, it tightly couples your objects to JDO

    It's more your factories which are tightly coupled -- i.e. the persistent objects don't need to know that they are JDO PersistenceCapable, but the code which creates and deletes them does...
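
    In other words, the split looks something like this (Customer and CustomerFactory are invented names; a DAO would look much the same):

    // The persistent class itself is plain Java with no JDO-specific code...
    public class Customer {
        private String name;

        public Customer(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // ...while the code that creates and deletes instances is what couples to JDO.
    class CustomerFactory {
        private final javax.jdo.PersistenceManager pm;

        CustomerFactory(javax.jdo.PersistenceManager pm) { this.pm = pm; }

        Customer create(String name) {
            Customer c = new Customer(name);
            pm.makePersistent(c);     // the JDO coupling lives here
            return c;
        }

        void delete(Customer c) {
            pm.deletePersistent(c);   // and here
        }
    }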

    Tom
  3. Your persistence-capable objects do not need to know anything about JDO unless, of course, they implement javax.jdo.InstanceCallbacks, which, IMHO, pretty much every object should, as the jdoPreDelete() method is very important for maintaining model integrity.

    In general, when an object is deleted, it should ensure that it deletes all of the objects it composites (in the UML sense of the term), and it should notify all objects that reference it that it is going away, so that the referencing objects may dissolve their collaboration with it.
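
    As a rough sketch (Invoice, InvoiceLine, and Customer are invented for the example), jdoPreDelete() is where that cleanup goes:

    import javax.jdo.InstanceCallbacks;
    import javax.jdo.JDOHelper;
    import javax.jdo.PersistenceManager;

    // Using jdoPreDelete() to keep the model consistent when an Invoice is
    // deleted: composed InvoiceLines are deleted with it, and the referencing
    // Customer is told to drop its reference.
    public class Invoice implements InstanceCallbacks {

        private Customer customer;            // an object that references this one
        private java.util.Set invoiceLines;   // composed objects (UML composition)

        public void jdoPreDelete() {
            PersistenceManager pm = JDOHelper.getPersistenceManager(this);
            // Delete everything this object composites...
            pm.deletePersistentAll(invoiceLines);
            // ...and let the referencing object dissolve the collaboration.
            if (customer != null) {
                customer.removeInvoice(this);
            }
        }

        // The remaining callbacks can be no-ops if they are not needed.
        public void jdoPostLoad() {}
        public void jdoPreStore() {}
        public void jdoPreClear() {}
    }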

    --Matthew
  4. First, I want to address your points about the problems with entity beans. The reason for communicating with an entity bean using remote semantics is that the data may reside anywhere. However, JBoss has a setting that turns RMI off for internal calls as an optimization (no serialization penalties), and I believe other app servers are doing this now as well.

    As for entity beans being specific to relational databases, I just don't see it. In a system I created, we read our entity bean information from an LDAP server; I used BMP and created a DAO for each entity bean. For all the enormous effort that goes into hiding this kind of work from the developer, doing it ourselves was trivial.

    As for your third issue, the CASE/IDE tools are starting to model the code better. I understand that the extra code can get out of sync, since the remote and home interfaces are not directly implemented, but that seems to be purely an IDE issue. And since most app servers now have a hot-deploy feature, you simply run your Ant script, which builds and deploys in a single shot, and you quickly find out whether anything is wrong with your EJB when the app server rejects it. If you are using CMP, then you must not be too worried about your data model, which is a big mistake for maintenance reasons. Even with the latest changes in CMP, I just don't see it as a viable solution.
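
    The BMP-plus-DAO arrangement is roughly this (UserBean, UserDao, and UserData are invented names; the DAO is the piece that actually talks to LDAP):

    import javax.ejb.EntityBean;
    import javax.ejb.EntityContext;

    // The entity bean's persistence callbacks delegate to a DAO, and the DAO
    // decides where the data actually lives (an LDAP directory, a file, a
    // relational database, and so on).
    public class UserBean implements EntityBean {

        private EntityContext ctx;
        private UserData data;                     // the bean's state
        private final UserDao dao = new UserDao(); // LDAP-backed in our case

        public void ejbLoad() {
            String id = (String) ctx.getPrimaryKey();
            data = dao.load(id);                   // read from LDAP, not JDBC
        }

        public void ejbStore() {
            dao.store(data);                       // write back through the same DAO
        }

        // Remaining lifecycle callbacks left empty for brevity.
        public void setEntityContext(EntityContext ctx) { this.ctx = ctx; }
        public void unsetEntityContext() { this.ctx = null; }
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
    }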

    Your recommendation of a message-driven-bean-based design is a good one. This kind of design can make scaling easier, since the messages can be balanced across multiple servers by the app server's JMS message handler/MDB invoker. However, I caution you on your choice of coarse-grained objects. I understand that you don't want a lot of hits to the database, but remember that the EJB container is forced to tie an entity bean to a transaction and let no one else use it until the transaction commits. If the entity beans become too coarse-grained, you lower your efficiency, because many threads have to wait for the bean to become free, and you increase the possibility of deadlock.

    Jason
  5. <Jason> The reason for communicating to the entity bean using remote semantics is that the data may reside anywhere.</Jason>

    I maintain that remotely manipulating fine-grained (as opposed to coarse-grained) objects is not a good idea, due to excessive network round trips; by now this is common knowledge. I am aware that any good app server already allows pass-by-reference semantics if you want them, and this is now formally supported by the new local interfaces in EJB 2.0.

    <Jason>I caution you on your choice to make coarse-grained objects. I understand that you don't want a lot of hits to the database, but remember that the EJB container is forced to tie that entity bean to a transaction and let no one else use it, until the transaction commits.</Jason>

    Two things here. First, transactions: the whole idea is that the developer should strive for short transactions, since long transactions tend to lock resources for unacceptable periods of time. Keep it short and sweet by using SLSBs and short-lived onMessage() methods in MDBs. Second, entity beans: I agree that their locking behavior would be problematic. Point taken.
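
    For what it's worth, a short-lived onMessage() amounts to something like the sketch below (WorkRequest and WorkProcessor are invented names; with container-managed transactions set to Required, the transaction lasts only as long as this method):

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.ObjectMessage;

    // Unpack a coarse-grained request, do the work against the persistent model,
    // and return, so the transaction stays short.
    public class WorkMessageBean implements MessageDrivenBean, MessageListener {

        private MessageDrivenContext ctx;

        public void onMessage(Message message) {
            try {
                WorkRequest request = (WorkRequest) ((ObjectMessage) message).getObject();
                new WorkProcessor().process(request); // keep the unit of work small
            } catch (Exception e) {
                ctx.setRollbackOnly(); // roll the short transaction back on failure
            }
        }

        // Standard MDB lifecycle callbacks.
        public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
        public void ejbCreate() {}
        public void ejbRemove() {}
    }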

    --Matthew