A new article by Chris Richardson, "Speeding Up J2EE Development and Increasing Reusability Using a Two Level Domain Model", describes a design strategy that accelerates development and improves reusability by structuring the domain model into two levels - a Plain Old Java Objects (POJO) level that implements the business logic and an entity bean level that implements the persistence.
Read "Speeding Up J2EE Development and Increasing Reusability Using a Two Level Domain Model"
It's very nice to see a design strategy that carefully considers the importance of testing and debugging, since these are things that are often badly under-rated in architectural approaches which focus only on the final system.
But a key problem with Entity EJBs is the inheritance problem: domain models typically use inheritance (as good OO design should), yet implementing inheritance with Entity Beans is rather difficult.
So my question is: since Entity Beans are here used as dumb data access objects, and since Entity Beans aren't very good as dumb data access objects, why not drop Entity Beans and use an O/R mapping tool that simply gives you data access objects? What are Entity Beans really getting you here (aside from buzzword compliance)?
Seems like a nice idea - would it also enable a flexible (and possibly dynamic) architecture to be imposed on the Java business logic classes? E.g. You could choose/change the component types and tier etc to map the Java classes onto...
In reply to Sean's question "what do entity beans give beyond buzzword compliance" I think the answer is very little in most applications. Once the idea of entities having remote interfaces is abandoned (as it effectively is in EJB 2.0) it's hard to see why persistent objects should be modelled as EJBs. And the complexity surrounding entity beans is becoming ridiculous for what they offer, in comparison to other O/R mapping approaches.
I think using Plain Old Java interfaces is a powerful way to decouple business logic from the details of persistence. There is a lot going for ordinary interfaces, in comparison with J2EE-specific interfaces.
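A minimal sketch of this kind of decoupling (all class names here are invented for illustration, not taken from the article): business code depends only on a plain repository interface, so a trivial in-memory implementation can stand in for the real persistence layer in tests.

```java
import java.util.HashMap;
import java.util.Map;

// Plain Java interface: no J2EE types anywhere in the business code.
interface RestaurantRepository {
    Restaurant findByName(String name);
    void save(Restaurant r);
}

class Restaurant {
    private final String name;
    private int freeTables;
    Restaurant(String name, int freeTables) { this.name = name; this.freeTables = freeTables; }
    String getName() { return name; }
    // Business logic lives in the POJO, not in the persistence layer.
    boolean reserveTable() {
        if (freeTables == 0) return false;
        freeTables--;
        return true;
    }
}

// The implementation could be JDBC, an O/R tool, or (for unit tests)
// a simple in-memory map -- the business code cannot tell the difference.
class InMemoryRestaurantRepository implements RestaurantRepository {
    private final Map<String, Restaurant> store = new HashMap<String, Restaurant>();
    public Restaurant findByName(String name) { return store.get(name); }
    public void save(Restaurant r) { store.put(r.getName(), r); }
}

public class PlainInterfaceDemo {
    static boolean bookTable(RestaurantRepository repo, String name) {
        Restaurant r = repo.findByName(name);
        boolean ok = (r != null) && r.reserveTable();
        if (ok) repo.save(r);
        return ok;
    }
    public static void main(String[] args) {
        RestaurantRepository repo = new InMemoryRestaurantRepository();
        repo.save(new Restaurant("Trattoria", 1));
        System.out.println(bookTable(repo, "Trattoria")); // true
        System.out.println(bookTable(repo, "Trattoria")); // false: no tables left
    }
}
```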
I discuss the issues around entity beans and using Java interfaces this way in my new book, Expert One-On-One J2EE Design and Development
(Wrox Press). It includes practical examples, and discusses efficient options for RDBMS access, including how to use JDBC effectively.
I think JDO is a more promising O/R mapping approach than entity beans. Unlike entity beans, it doesn't tie persistent objects to the EJB container, and it doesn't usually require the implementation of special interfaces.
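For comparison, persisting a POJO through the JDO 1.0 interfaces looks roughly like this. This is an untested sketch: it needs a JDO implementation on the classpath plus bytecode enhancement of the persistent class, and the vendor factory class and Restaurant POJO are placeholders.

```java
// Untested sketch of JDO-style transparent persistence (JDO 1.0 API shape).
Properties props = new Properties();
props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                  "com.example.VendorPMFactory");   // vendor-specific placeholder
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
PersistenceManager pm = pmf.getPersistenceManager();
Transaction tx = pm.currentTransaction();
try {
    tx.begin();
    Restaurant r = new Restaurant("Trattoria");    // a plain POJO
    pm.makePersistent(r);                          // no special interfaces to implement
    tx.commit();
} finally {
    if (tx.isActive()) tx.rollback();
    pm.close();
}
```

The point is that the domain object itself stays a POJO; the persistence calls stay at the edges, behind whatever facade you choose.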
Entity EJBs offer a lot more than mere persistence. They relieve component developers from worrying about instance management, life cycle management, transaction management, location transparency, quality of service, security management (remote references) etc. In my experience, using entity EJBs for persistence improves productivity by leaps and bounds compared to writing a bespoke persistence framework. The most important thing is that you don't need to spend a lot of time testing custom code that provides system-level services, since the container provides them in the case of entity EJBs. IMO transactional data persistence should always be implemented using entity EJBs rather than bespoke code.
As for Rod's book on J2EE Design and Development: I have reviewed it, and it is some of the best J2EE material I have ever read.
With the introduction of local interfaces, the speed of entity beans has improved considerably. The only drawback is that remote and local interfaces cannot be interchanged easily; it has to be done programmatically. So a bit of planning is required here to make sure the local interfaces are only called by beans in the same container. Also, with the improvements in EJB QL, especially in WebLogic, even nested queries are supported, almost like SQL. So with the container taking on the burden of transaction management etc. and the improvements in querying and performance, why not use entity beans?
Entity beans just don't make any sense. On my current project I wanted to use JDO, until I came across Hibernate. It's a wonderful product (like most open source) and uses reflection, as opposed to JDO's bytecode enhancement. It's fast and comes with very good documentation. I guess if you can live without a standard like JDO, you can consider Hibernate or another O/R tool.
I completely agree about the EB; I find no use for it. In my current J2EE project, I have SLSBs backed by a persistence and caching engine with O/R mapping.
Since Entity Beans are here used as dumb data access objects, and since Entity Beans aren't very good as dumb data access objects, why not drop Entity Beans and use an O/R mapping tool that simply gives you data access objects?
I also don't see a benefit of Entity Beans here. IMHO it would make more sense to turn it the other way round, if you choose to use EJB: Session Beans for business logic, POJOs and a O/R toolkit for persistence.
Of course, using Session Beans only makes sense if you need remote accessibility, server-side state for rich clients, declarative transactions at the facade level, or declarative security. If you just need the former, you still have the choice between Stateless Session Beans and a decent remoting toolkit like GLUE or Caucho's Hessian.
So the simplest approach is using POJOs for business logic classes, with plain Java facade interfaces, and POJOs for domain objects, with JDO or a decent O/R toolkit for persistence. Of course, you can and should use JNDI datasources and JTA transactions, but preferably only behind your facades.
Some people tend to associate a clean client-independent middle tier with EJB. In reality, choosing appropriate patterns is the most important issue, and sticking to the KISS rule. I guess such a simple approach is a viable choice for more than 90% of J2EE web applications.
Using this does provide some things, but it has a serious performance drawback: I hit the database N+1 times for each query rather than just once (once to load the base objects, and then once more to load each specific object).
I can't cast either, so if I have some specific operations/attributes I want to use, I have to go and do the create again, even though I already have the object (admittedly, with a good caching strategy it should not hit the database, but it is still ugly and somewhat inefficient).
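A toy illustration of the N+1 effect described above. The "database" here just counts queries, and all names are made up; the shape of the access pattern is the point.

```java
import java.util.ArrayList;
import java.util.List;

// Fake database that only counts how many queries it receives.
class CountingDb {
    int queries = 0;
    private final int rows;
    CountingDb(int rows) { this.rows = rows; }

    List<Integer> selectBaseIds() {                 // "SELECT id FROM base"
        queries++;
        List<Integer> ids = new ArrayList<Integer>();
        for (int i = 0; i < rows; i++) ids.add(Integer.valueOf(i));
        return ids;
    }
    String selectDetail(int id) {                   // "SELECT ... WHERE id = ?"
        queries++;
        return "detail-" + id;
    }
    List<String> selectBaseJoinDetail() {           // "SELECT ... base JOIN detail"
        queries++;
        List<String> out = new ArrayList<String>();
        for (int i = 0; i < rows; i++) out.add("detail-" + i);
        return out;
    }
}

public class NPlusOneDemo {
    // Entity-bean style: one finder query, then one ejbLoad-style query per row.
    static int naive(CountingDb db) {
        for (int id : db.selectBaseIds()) db.selectDetail(id);
        return db.queries;   // N + 1 queries
    }
    // Hand-written SQL (or a decent O/R tool): a single query with a join.
    static int joined(CountingDb db) {
        db.selectBaseJoinDetail();
        return db.queries;   // 1 query
    }
    public static void main(String[] args) {
        System.out.println(naive(new CountingDb(10)));   // 11
        System.out.println(joined(new CountingDb(10)));  // 1
    }
}
```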
I think that using entity beans (EB) as a persistence mechanism is wrong. I would see an entity bean more like a stateful session bean with even better state management, surviving shutdowns and crashes. That is, they _require_ persistence, but do not aim to _provide_ it.
Someone (I think it happened at Sun, even before it was released to the public) had the idea, though, that it could be used instead of an O/R mapping tool, and so here we are, trying to put a square peg into a round hole.
The problem now is that a lot of people see it as such (i.e. the idea was sold to them by their favourite application server vendor), and it's hard to persuade a client who has just put out a lot of money on an app server to spend about as much on a decent O/R tool.
From this point of view, EJB 2.0 made it even worse, as it abandoned the component idea entirely (sorry, an order line is not a component, nor is an order) and started moving towards being a semi O/R tool.
* The example is trivial. I'm sorry, but 5-odd domain objects isn't proof of how efficient it is to develop this way.
* The test data will be somewhat contrived and limited.
* Massive duplication of effort: homes, the DomainManager factory, etc. Obviously this doesn't show in this example, but what happens when you have 100+ domain objects?
* What app server was used? Using something like JBoss or Resin-EE to test local entities is usually quite efficient. On my P4 2GHz notebook with Jikes compiling stubs, redeploying only takes a few seconds. You tend to become inefficient if you regularly make small changes and compile-deploy-test each one. Rather, make several smaller changes and then redeploy, or make one larger change and then redeploy. It's about balance.
* If possible, business rules should be generalised/abstracted into separate classes so that they can be reused in different domain objects, and possibly in UI validation.
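For instance, a rule generalised into its own class might look like this hedged sketch (the rule interface and rule names are invented): the same objects can then be applied in a domain object or in UI validation.

```java
// A reusable business rule: returns null if valid, otherwise an error message.
interface ValidationRule {
    String check(String value);
}

class NotEmptyRule implements ValidationRule {
    public String check(String value) {
        return (value == null || value.trim().length() == 0) ? "value is required" : null;
    }
}

class MaxLengthRule implements ValidationRule {
    private final int max;
    MaxLengthRule(int max) { this.max = max; }
    public String check(String value) {
        return (value != null && value.length() > max) ? "longer than " + max : null;
    }
}

public class RuleDemo {
    // Any layer (domain object, UI form handler) can run the same rule set.
    static String validate(String value, ValidationRule[] rules) {
        for (ValidationRule rule : rules) {
            String error = rule.check(value);
            if (error != null) return error;
        }
        return null;
    }
    public static void main(String[] args) {
        ValidationRule[] rules = { new NotEmptyRule(), new MaxLengthRule(10) };
        System.out.println(validate("", rules));           // value is required
        System.out.println(validate("acceptable", rules)); // null (passes)
    }
}
```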
This design consideration addresses an often overlooked design requirement when folks rush into a J2EE implementation. So kudos for designing a domain model outside of J2EE-specific technologies (e.g., EJBs).
An approach we've used is implementing our Domain Model with POJOs (Plain Old Java Objects) and having a PersistenceManager that delegates how the objects are persisted. We are then flexible about the persistence mechanism (EJBs, DAO, JDBC, serialization) and our domain objects are agnostic of the implementation details. We maintain business logic within the domain objects, while decoupling from container technologies, which will always change.
PersistenceManager pm = PersistenceManagerFactory.createPM();
.. other CRUD persistence operations implemented by the PM
Even with complex relationships, your PersistenceManager handles the mapping from objects to persistence mechanism structures (RDBMS, EJBs, OODBMS, file system, legacy integration). This is especially beneficial when the domain objects don't map directly to how your database or your CMP beans were designed. This concept is similar to JDO, but not exactly the same.
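A minimal sketch of that delegation shape, assuming invented method names beyond the createPM() call shown above: the domain code only ever sees the PersistenceManager interface, so the mechanism behind it can change without touching the domain objects.

```java
import java.util.HashMap;
import java.util.Map;

// The plug-in point: domain code depends only on this interface.
interface PersistenceManager {
    void save(String id, Object domainObject);
    Object load(String id);
}

// One possible mechanism; JDBC, entity-bean or serialization
// implementations would plug in behind the same interface.
class InMemoryPersistenceManager implements PersistenceManager {
    private final Map<String, Object> store = new HashMap<String, Object>();
    public void save(String id, Object o) { store.put(id, o); }
    public Object load(String id) { return store.get(id); }
}

public class PersistenceManagerFactory {
    // Swapping mechanisms means changing only this factory (or its config).
    public static PersistenceManager createPM() {
        return new InMemoryPersistenceManager();
    }
    public static void main(String[] args) {
        PersistenceManager pm = createPM();
        pm.save("r1", "Trattoria");
        System.out.println(pm.load("r1")); // Trattoria
    }
}
```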
Having EJBs inherit from Domain Interfaces can become a challenge with complex relationships.
Again, I think this is a good approach for certain implementations, and is a good step toward decoupling business logic (intellectual property) from technology implementations that can change often (J2EE Technologies).
I don't understand why JDO was not considered by the author in place of a POJO-over-entity-beans architecture. JDO can be easily plugged into the architecture here, fronted by coarse-grained session bean (and message-driven bean) facades, all with full transaction & security integration a la JCA.
The great benefit of JDO is that you can almost completely transparently persist POJOs into any datastore you'd like that has a JDO implementation built for it. This includes pretty much every relational database, thanks to JDBC-based JDO O-R mapping implementations (see Kodo, Lido, ObjectFrontier, JDO Genie & others), as well as more esoteric datastores like object databases (Poet, eXcelon, Versant & others).
This is the biggest benefit of JDO: a Java-standard way to achieve persistence for Java objects in datastore-agnostic ways. It is not completely, 100%, absolutely transparent, but it's pretty close. I would see the vast majority of JDO efforts beginning with a JDO implementation over a relational datastore until the model becomes complex enough or system load becomes high enough that the introduction of an object database would yield significant performance gains.
A very good approach to isolate the persistence tier from the domain tier, but I would simplify the POJO domain tier using Business Interfaces.
This avoids business method duplication in, for example, the Restaurant domain object.
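A sketch of the Business Interface idea: the business methods are declared once, in a plain interface that both the POJO and its component wrapper implement. The Restaurant name follows the article; everything else is invented, and the "component" here is a plain stand-in for an entity-bean wrapper.

```java
// Declared once: every implementor must expose exactly these business methods.
interface RestaurantBusiness {
    int availableTables();
    boolean reserveTable();
}

// The POJO implements the business logic once.
class RestaurantPojo implements RestaurantBusiness {
    private int tables;
    RestaurantPojo(int tables) { this.tables = tables; }
    public int availableTables() { return tables; }
    public boolean reserveTable() {
        if (tables == 0) return false;
        tables--;
        return true;
    }
}

// Stand-in for an entity-bean (or other component) wrapper: it satisfies the
// same interface by delegation, so no method signatures are duplicated by hand,
// and the compiler catches any drift between the two layers.
class RestaurantComponent implements RestaurantBusiness {
    private final RestaurantBusiness delegate;
    RestaurantComponent(RestaurantBusiness delegate) { this.delegate = delegate; }
    public int availableTables() { return delegate.availableTables(); }
    public boolean reserveTable() { return delegate.reserveTable(); }
}

public class BusinessInterfaceDemo {
    public static void main(String[] args) {
        RestaurantBusiness r = new RestaurantComponent(new RestaurantPojo(2));
        r.reserveTable();
        System.out.println(r.availableTables()); // 1
    }
}
```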
I think the most powerful aspect of this approach is the ability to use the persistence mechanism as a plug-in.
I am using a very similar approach in a current project. The difference from the described strategy is that in our project the domain classes (POJOs) are not abstract classes extended by local entity beans. They are freestanding first-class objects which are wrapped by a JMX facade and plugged into a JBoss microkernel. The reason for this is that the domain model is manipulated by CPU-intensive AI algorithms; the performance penalty of local interfaces would be too much.
In this architecture the session facade is a kind of hub. It takes client requests, forwards them to the domain model through the MBean server, updates persistent state by calling entity beans, and publishes changed state through JMS topics to Swing clients.
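A stripped-down sketch of the MBean-server hop described above, using the standard javax.management API with a private MBean server rather than JBoss's, and with all domain names invented (the Planner stands in for a CPU-intensive domain POJO):

```java
import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

public class JmxFacadeDemo {

    // Standard MBean convention: management interface name = class name + "MBean".
    public interface PlannerMBean {
        int plan(int input);
    }

    // The domain POJO, registered as a standard MBean.
    public static class Planner implements PlannerMBean {
        public int plan(int input) { return input * 2; } // stand-in for AI logic
    }

    // The session facade would forward client requests through the server:
    public static int callThroughServer() throws Exception {
        MBeanServer server = MBeanServerFactory.createMBeanServer();
        ObjectName name = new ObjectName("domain:type=Planner");
        server.registerMBean(new Planner(), name);
        Object result = server.invoke(name, "plan",
                new Object[] { Integer.valueOf(21) },
                new String[] { "int" });
        return ((Integer) result).intValue();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callThroughServer()); // 42
    }
}
```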
Some thoughts to share on this ...
A J2EE server may distribute any given J2EE component across multiple JVMs, so care needs to be taken when writing J2SE classes (POJOs) for subsequent use within a J2EE component, to ensure that the POJO behaves consistently both in single-JVM standalone (test) mode and when deployed as part of a J2EE component in a potentially multi-JVM container.
Although J2EE builds on top of J2SE, certain programming restrictions apply to EJB components -- ideally requiring them to avoid certain J2SE code (e.g. use of static non-finals, synchronization, passing the this reference as a parameter, returning the this reference, etc.). Most of this acknowledges & addresses the fact that the J2EE environment is potentially multi-JVM...
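A small example of why static non-finals are on that list (class and method names are invented; the failure mode itself only shows up once a second JVM is involved):

```java
public class StaticPitfallDemo {

    // Hypothetical helper: looks like a global id sequence, but the static
    // field is shared only within one JVM (strictly, one classloader).
    static class IdGenerator {
        static int next = 0;
        static int nextId() { return ++next; }
    }

    public static void main(String[] args) {
        // Within a single JVM the counter behaves as expected...
        System.out.println(IdGenerator.nextId());
        System.out.println(IdGenerator.nextId());
        // ...but if the container spreads the component across JVMs, each
        // JVM starts its own copy at 0, so the "unique" ids collide.
        // Such state belongs in the database or under container control.
    }
}
```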
This aspect is not enforced (i.e. J2EE server providers must not physically subset the J2SE APIs to enforce the restrictions), hence the need for the programmer to be mindful of it.
ALSO, EJB components are meant to be pure "business logic only" components that need to:
* access resources
* read/access constants
* manage transactions
* enforce security
in a manner prescribed for the J2EE environment -- one that is different from the J2SE environment.
To some extent, & depending on the particular application, you may work around some of the above aspects by reducing the granularity of the J2SE classes (POJOs), consequently requiring a larger number of POJOs (for a fixed amount of application functionality) -- as mentioned in the drawbacks section of this article.
Developing EJB applications often involves helper classes & utilities anyway, & one typically unit tests such helper classes & utilities separately... and code metrics & API-level re-usability are factors that come into play when breaking some logic into a separate helper class/utility.
On the issue of consciously "decoupling the business logic from EJB technology" --- if EJB components were always meant to be pure "business logic" components to start with, is the motivation to decouple business logic completely from EJB technology consistent with that?
J2SE, J2EE & J2ME are separate platforms & often incompatible. The distinction between J2SE & J2ME is easier to recognize, since J2ME has physically fewer classes, different class definitions for identically named classes, and classes exclusive to J2ME / absent in J2SE....
In the case of J2EE vs. J2SE, the distinction is subtle, since the small portion of J2SE ideally meant NOT to be used in a J2EE component is not enforced by the J2EE container/server...
If business rules & business logic include ensuring certain transactional integrity, enforcing certain application-level security constraints, etc., well, the same is defined & provided by the J2EE platform -- say by using programmatic calls to the EJBContext, followed by calls to getUserTransaction(), followed by calls to begin(), commit(), rollback(); ... getCallerPrincipal(), isCallerInRole(), etc. The entire J2EE API comprises interfaces only, with the exception of the exception classes...
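In code, those programmatic calls take roughly this shape inside a bean-managed-transaction session bean (an untested sketch: it needs the javax.ejb & javax.transaction APIs plus a container, and all the business names are invented):

```java
// Untested sketch, EJB 2.x shape, bean-managed transactions.
public class TransferBean implements SessionBean {
    private SessionContext ctx;
    public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }

    public void transfer(String from, String to, long amount) throws Exception {
        // Application-level security check via the container:
        if (!ctx.isCallerInRole("teller")) {
            throw new SecurityException("caller "
                + ctx.getCallerPrincipal().getName() + " is not a teller");
        }
        // Programmatic transaction demarcation:
        UserTransaction ut = ctx.getUserTransaction();
        ut.begin();
        try {
            debit(from, amount);
            credit(to, amount);
            ut.commit();
        } catch (Exception e) {
            ut.rollback();
            throw e;
        }
    }

    private void debit(String account, long amount) { /* ... */ }
    private void credit(String account, long amount) { /* ... */ }
    // ejbCreate/ejbRemove/ejbActivate/ejbPassivate omitted for brevity.
}
```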
So tying an application (KNOWN to be a server-side application) to a set of J2EE interfaces -- given that we DO have dozens of J2EE servers available to us, many for free -- may not be much of a concern in some instances, & the motivation for decoupling the business logic from the EJB components may be insignificant in those instances.... With so many J2EE servers available, hopefully you'll find that one of them meets your performance, scalability & reliability requirements....
1) To me it seems like the interfaces in the POJO layer are really just an application of the "business interface" pattern [Marinescu].
2) Business logic should not be in the entity bean layer to begin with. The template methods in the abstract POJO classes should just be private helper methods in the session facade.
my $0.02 ($0.01 per thing),
This article is a great start, but the problem is bigger than just business objects and persistence. There are a number of related aspects that need similar treatment - for example, the web layer (XML serialization, servlets, HTML, XSL, etc.) can be based off the same core objects. There comes a point, however, where it's awfully expensive to make changes to your objects because the maintenance cost is so high. What really needs to happen is that developers need to be able to *declare* their application. That is, they need to be able to say "my app has objects a, b, and c, and they look like this. My app has components x, y, and z that operate on a, b, and c in the following manner." The rest - persistence, communication, presentation, etc. - is just details that should be separate from the app itself.
By combining the pattern described in the article - separation of the application from its implementation - and code generation (think XDoclet on steroids) you can keep all of your business code separate from the implementation of the application. This means you can switch from EJB to JDO or straight JDBC without thinking about it. Just plug in a different code generator, and your app is guaranteed to work. The same holds true for the UI. Want to switch to Struts? Plug in a code generator for struts, and now you have a Struts UI.
Now, the shameless plug. :) There's a project over on SourceForge that implements an architecture like this. The architecture is called "Structs and Nodes Development" (SAND); the SF project is Sandboss. It's a declarative approach to application development that's based on almost 20 years of collective experience in the world of enterprise software. I've used a previous incarnation in production, where it saved somewhere around 5 man-years of development/QA time.