I came across an article entitled EJB's 101 Damnations. The article, written by self-proclaimed Java evangelists, sums up some of the problems with EJBs. Although incorrect on some points, it is still an interesting read.
Read EJB's 101 Damnations
Of course the question is, what alternatives do you have? :)
Here is my personal opinion:
It is an interesting article. Nothing is perfect in this world, and of course EJB has issues. EJB is a general-purpose tool, and when you try to solve certain specific problems, all of its issues become visible. But still, I think that in 85% of EJB uses (I should say correct uses), the developer will succeed and get satisfactory performance and scalability. For the other 15%, developers should either wait for better specs and/or Java app servers, or design their own enterprise application server.
I have to wonder where this guy is coming from. I'll admit I didn't read the whole thing...so flame me.
I am new to the application server scene, coming from ASP, then PHP, then Enhydra XMLC, then "fat" JSP, now J2EE. I recently got my first real prototype EJB working in JBoss. I was so dumbfounded at how much work I could save that I couldn't sleep for days (well, not really, but I dreamed CMP!)
He and his team sound like developers who were forced to work with EJB, as opposed to working with it because they wanted to. When you are forced into a situation you disagree with, you look for reasons why you shouldn't be in that situation. If he and his team were spending their time coming up with these issues and communicating them to each other, it seems to me they were coming at it with a glass-half-empty attitude. It reminds me of the RPG programmers I work with, who are always grumbling about all this "new" stuff they are being forced to work with, like interfaces other than dumb terminals into their applications. They argue with people about why flat-file "database" access/modification is better than SQL (DBMS) access to their data. They are writing a new web interface to their core application in RPG. Do I need to tell you how bad an idea it is to be writing a new web application (with a targeted five-year lifespan, no less) in this day and age using RPG, of all languages?
You get out of any technology, or situation, what you expect to get out of it. If you go into your first J2EE app looking for the problems, you are bound to find things which are not 100% perfect. I would like to say I am glad I don't have to work with these people, who sound like they have a very poor attitude towards things.
While they seem to focus on what is missing, I go home every day just marveling about all the "infrastructure" that magically just "happens" with a small amount of direction on my part.
It isn't that bad to use flat files... what was the comment about GUIs?... I can think of many occasions where a flat file accompanying a database is a powerful optimization... don't be fooled into thinking that everything new is such a great advance and the old ways are no longer needed...
Some EJB containers don't have two-phase commit... if you have one database and one web server, why not use the database's transaction software? It has two-phase commit... which really is required in real transactions...
EJB 2.0's XML QL in the descriptor files takes away some possibilities for dynamic SQL...
If you are doing some stuff with read-only data, like genomics or data mining, a database slows you down dramatically, and it is a read-only database anyway... flat files with indices are preferable, or even mandatory, here... it might make things 10 times faster... one year instead of 10...
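For the read-only case, the flat-file-with-index idea can be as simple as an in-memory map of byte offsets. This is just a minimal sketch using only the JDK (file format and field names are made up for illustration):

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

// Read-only lookups against a flat file: scan once to build an
// in-memory index of byte offsets keyed by the first field, then
// answer each query with a single seek instead of a full scan.
public class FlatFileIndex {
    private final RandomAccessFile file;
    private final Map<String, Long> offsets = new HashMap<>();

    public FlatFileIndex(File f) throws IOException {
        file = new RandomAccessFile(f, "r");
        long pos = 0;
        String line;
        while ((line = file.readLine()) != null) {
            String key = line.substring(0, line.indexOf(','));
            offsets.put(key, pos);
            pos = file.getFilePointer();
        }
    }

    // Returns the full record for a key, or null if absent.
    public String lookup(String key) throws IOException {
        Long pos = offsets.get(key);
        if (pos == null) return null;
        file.seek(pos);
        return file.readLine();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("records", ".csv");
        try (PrintWriter w = new PrintWriter(f, "UTF-8")) {
            w.println("BRCA1,chr17,43044295");
            w.println("TP53,chr17,7668402");
        }
        FlatFileIndex idx = new FlatFileIndex(f);
        System.out.println(idx.lookup("TP53")); // prints TP53,chr17,7668402
    }
}
```

For truly huge datasets the index itself could live in a second file, but the principle is the same: you pay for one sequential pass, and every lookup afterwards is one seek.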
Being a bit new to EJB, and this article covering so much, I have a few questions that I can't seem to find many answers for. I actually read some of it a few days ago. I noticed that this article, like so many others, points out more controversy over how data is accessed in EJB. There are many ways, and I assume that no one way is the best. There have been many articles here and elsewhere comparing CMP, BMP, and JDBC (or equivalent raw-ish) methods of accessing data. Also there is a lot of controversy, it seems, over exactly how much data should be loaded at a given time and how it should be passed amongst the tiers (other than the obvious SQL (no flames about that please, I know there are other, better ways in some situations... flat files, XML, etc.)).
Well... in that context...
I haven't seen many people who advocate a mix of the methods. Is there some reason? Is it that I haven't seen this approach because it requires larger connection pools? Accessing stored procedures from a session bean doesn't seem to me like a bad solution if you don't mind binding yourself to a particular server. Is it considered bad form to mix CMP for tables that are small (basically lookup tables) with a message-driven bean for calling stored procedures for longer processes that need optimization and with session beans that want to simply add a row, via JDBC, to a table that won't need caching because it is a log-ish thing? (The longest run-on sentence I've seen :) )
Is it because most people who argue the topic are a bit more zealous than they should be about which way is better that I haven't really read anything about the pros and cons of each method in a more objective manner, or are there hazards that I've not yet tread on?
Items 10-34 address data concerns... (a 2:1 ratio)
Why is item 18 a bad thing? Why is it bad to use DAOs with EJB???
Can't you avoid some of item 19 by avoiding entity beans when they are slower?
Item 20: Wouldn't it be wise to use the easiest method (CMP) until you see a performance problem (or foresee one), then move SQL into the session bean where it would help?
Item 21: Can't you just create a new table for your business metadata?
Items 24, 25, 30, and 31 seem to be directed at CMP, when I would probably be using some other method to implement a feature that would have these problems.
Item 29 in particular contradicts my methodology. Why would there be just one strategy? Or is he saying that having multiple ways of doing something is bad? Or is he saying there is no formal way of choosing how you should access the data, and that that is bad?
It seems to me that CMP and BMP by definition were made for persistent data caching, not data mining, logging (making audit trails), or even processing data. Should one try not to use them for more than mostly read-only persistent data, unless it just makes sense in some other case that I haven't run into?
I'm sure Sun and the app server vendors will eventually handle all the issues in some uniform way, but until then, does it not make sense to take the middle road? Some coarse-grained access (when accessing the data wouldn't make sense without the details), some fine-grained (when it does make sense to access only small amounts of data), some DAOs from session beans (when you know more about what needs to be cached than you can tell the app server), some JDBC from MDBs (when a response may not even be necessary), and some entity beans (when generic caching makes sense), as each seems to fit a certain problem better than the others.
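The mix-and-match approach above gets a lot less scary if callers only ever see a DAO interface, so any one table's strategy can change without touching the rest of the code. A minimal sketch of that seam (all names hypothetical), with a trivial in-memory implementation standing in for CMP, JDBC, or a stored procedure:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical DAO seam: session-facade code depends only on this
// interface, so a table can be served by CMP today and hand-tuned
// JDBC (or a stored procedure) tomorrow without touching callers.
interface CountryDao {
    String findName(String isoCode);
}

// In-memory stand-in for a small, read-mostly lookup table -- the
// kind of data the post suggests is a natural fit for CMP or a cache.
class CachedCountryDao implements CountryDao {
    private final Map<String, String> cache = new HashMap<>();
    CachedCountryDao() {
        cache.put("FR", "France");
        cache.put("JP", "Japan");
    }
    public String findName(String isoCode) {
        return cache.get(isoCode);
    }
}

public class DaoSeamDemo {
    public static void main(String[] args) {
        // The implementation is chosen at exactly one point.
        CountryDao dao = new CachedCountryDao();
        System.out.println(dao.findName("JP")); // prints Japan
    }
}
```

The point is not the toy cache; it is that the choice of persistence strategy is confined to one constructor call, which is what makes the "middle road" maintainable.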
-Sam actually, flames would be very much appreciated :) If you want to flame me directly, use org.techstacy@sam (anti-spambot, little-endian address)
Well, I was just reading through the "conceptual items", didn't get to read the rest (there are 101 items, after all). Here are my notes so far (addressing the items). All the notes are, of course, just my opinion:
1. JavaBeans were designed to support composable GUI applications, and a local component model in general. The JavaBeans event model in particular assumes a single address space. These are not the design goals, and not the basic assumptions, of EJB. An "elaborate" event model would be nice, but it is impractical to implement it at the same level as the JavaBeans component model. EJB does provide an asynchronous notification mechanism based on JMS. Note that other distributed networking platforms, such as Jini, do not implement their event mechanisms like JavaBeans does, for the same reasons. "Core Jini" by Keith Edwards contains a pretty full coverage of this topic.
2. IMHO the focus was on optimistic locking rather than pessimistic locking. EJB doesn't do pessimistic locking by default, and unless you do something special, you get optimistic locking. That's why we have all these patterns for doing pessimistic locking - it's not the key focus of the spec.
3. EJB is not based on object pooling. It is designed in a way that allows object pooling. If it wasn't, there would be no way to pool objects. You could equivalently say "EJB is designed around lazy loading: why do we need these abstract accessors?". We need them because we want App servers to be able to implement lazy loading! I do agree that object pooling may not be so important, but I don't think it's a conceptual design problem in EJB.
4. Comments about the specific items 5-9 follow.
5. RMI gave up access transparency a long time ago. The Jini platform is also based on the concept that "remoteness is a part of an interface". My point is that many people, including myself, think that access transparency is a bad thing. Again, there is a good coverage on this in Core Jini.
6. Well, this is simply not true. EJB vendors can (and are encouraged to) implement fault tolerance. The most common way of achieving this is using smart stubs. WebLogic, WebSphere, iAS and a lot of others do implement this fault tolerance functionality. In fact, I don't think I know a single commercial app server that doesn't support it.
8. Again, this is not true. EJB doesn't talk about what happens when objects "move", because it doesn't specify a concept of an object that "moves" from one place to another. However, if an app server supports it, it can implement the redirection functionality in its stubs.
I think items 7 and 9 address real issues, and I think the problem is that EJB has not placed them in its scope yet. Either it should put them in its own scope, or some other standard should. 9 is a little tricky, because EJB currently relies on the DataSource to perform the locking, and I'm not sure that's a bad idea. However, I do think that EJB should make it easier to support pessimistic locking and similar strategies - because they are in common use.
"5. RMI gave up access transparency a long time ago. The Jini platform is also based on the concept that "remoteness is a part of an interface". My point is that many people, including myself, think that access transparency is a bad thing. Again, there is a good coverage on this in Core Jini. "
Object transparency is the Holy Grail of distributed OO!
Why is it a bad thing? Allowing clients to use a remote object as if it were local is a big deal. That is what EJB is mostly about; in particular Entity Beans. This is also where EJB falls way, way, way short.
>> Object transparency is the Holy Grail of distributed OO!
>> Why is it a bad thing? Allowing clients to use a remote
>> object as if it were local is a big deal. That is what EJB
>> is mostly about; in particular Entity Beans. This is also
>> where EJB falls way, way, way short.
Object transparency may be great for OO - it is a nice ideal - but it's not practical.
A remote call is fundamentally different from a local call. This is evident by the fact that you have different exceptions to catch (e.g. RemoteException).
If the concept of location transparency is valid, then why do we not use fine-grained interfaces over the network? Why do we have to account for the fact that the call might fail?
While at first I really didn't like the addition of Local Interfaces (coming from a CORBA background, it didn't make sense), after I read some of the justification it was clear to me that remote objects should be treated differently from local objects - and that the developer must be conscious of the differentiation.
As for the article:
It has become quite fashionable now to bash EJB because it is no longer the newest darling technology (Web Services! Web Services!). Still, criticism is the stimulant to improvement and there is always room for improvement.
"Object transparency may be great for OO - it is a nice ideal - but its not practical.
A remote call is fundamentally different from a local call. This is evident by the fact that you have different exceptions to catch (e.g. RemoteException)."
It is different only because of the implementation(s). It need not be.
"If the concept of location transparency is valid, then why do we not use fine-grained interfaces over the network?"
Because the implementations are bad, the goal is bad?
"Why do we have to account for the fact that the call might fail?"
Do you not have to account for local failures as well?
MyY y = x.getY();
What if y is null? You have to do something even though it's local.
"While at first I really didnt like the addition of Local Interfaces (coming from a CORBA background, it didnt make sense), after I read some of the justification, it was clear to me that remote objects should be treated differently to local objects - and that the developer must be conscious of the differentiation. "
Like much of the current J2EE technology, the Local Interface is a pure hack.
Can you elaborate on the justification?
>> It is different only because of the implementation(s).
>> It need not be.
I am not sure I understand what you mean.
What remoting technology provides true object location transparency?
Every remoting technology I know of (RMI, CORBA, DCOM, SOAP) has a concept of a remote exception... something to do with the failure of the remote object, the remote server, the network, the marshalling, etc.
The fact that you have a separate class of exceptions devoted to remote calls suggests that they are quite different no? (at least to me it does)
Moreover (more importantly), when you are making remote calls, you have to specifically treat failure cases where you don't know whether the call on the remote object actually completed. Think of a case where you make a remote call, the transaction commits, and then, before the server can send your response packet, it dies. You have no way of knowing whether the call succeeded or not. There is absolutely no way to determine when the failure occurred.
This is not the case with a local call. Because you are in the same JVM, you always know what happened, because the one JVM is in control.
Hence the fundamental difference from my point of view (and the view of other people behind LocalInterfaces, Jini, etc).
It's just one of the arguments why Local Interfaces are not a hack. They are there for a reason. They are there because you are forced to treat local and remote calls differently. You can try to hide the fact, but it is risky (especially in a transactional environment).
"What remoting technology provides true object location transparancy?"
I don't know of one. That's why I said it's the Holy Grail.
As far as I know true object transparency does not exist. And Entity Beans are a far cry from what we need.
"Its just one of the arguments why Local interfaces are not a hack. They are there for a reason. They are there because you are forced to treat local and remote calls differently."
I don't want to have to treat them differently. Having to do so is why I consider it a hack. Intra-container calls should be easy to implement.
Oh well, I'll keep dreaming ;-)
At the risk of receiving some flaming: COM/DCOM provided pretty good location transparency. Calls to a remote DCOM component were made in exactly the same way as to a local COM component; a registry change was all that was required to make the component a remote reference. Local calls therefore were very fast; remote calls use a stub/proxy pair, but DCOM sorted it all out. This can slug performance if you don't realise a component is remote, but we just ensured we serialised calls to any component that could be remote. .NET in some ways has gone the Java way, using hand-crafted remoting, but I believe it will ensure that if the component is local then it bypasses the remoting and so does not slug performance.
our 'friend' n n might flame you for that but I won't; you make a good point
n n has been a little quiet of late. He is always entertaining, but not always intentionally!
>> I don't want to have to treat them differently. Having to
>> do so is why I consider it a hack. Intra-container calls
>> should be easy to implement
I don't think it is possible to do anything other than treat them differently. Apart from the reasons I have mentioned in earlier posts (errors, performance, transactions, etc.), Java treats local and remote calls very differently. One has pass-by-reference semantics; the other is pass-by-copy.
It is not a good idea, I think, to hide that.
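The pass-by-reference vs. pass-by-copy difference can be demonstrated without any container. This sketch simulates RMI's argument marshalling with plain JDK serialization (the `Account` class is made up for illustration):

```java
import java.io.*;

// Local calls pass object references; RMI marshals arguments by
// serialization, so the remote side works on a copy. This sketch
// simulates the marshalling step to show the semantic difference.
public class CopySemantics {
    static class Account implements Serializable {
        int balance = 100;
    }

    // Simulate RMI argument passing: serialize, then deserialize.
    static Account marshal(Account a) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(a);
        oos.flush();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        return (Account) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Account original = new Account();

        Account local = original;   // pass-by-reference semantics
        local.balance = 50;         // caller sees this change

        Account remote = marshal(original); // pass-by-copy semantics
        remote.balance = 0;                 // caller does NOT see this

        System.out.println(original.balance); // prints 50, not 0
    }
}
```

Code written against a local reference that silently becomes a remote stub would see its mutations vanish, which is exactly why hiding the distinction is risky.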
"Its just one of the arguments why Local interfaces are not a hack. They are there for a reason. They are there because you are forced to treat local and remote calls differently"
I wouldn't consider Local interfaces a hack, but I really think it would be more elegant if you could specify the local vs. remote property as a deploy-time option instead of a code-time issue.
"Object transparency may be great for OO - it is a nice ideal - but its not practical.
A remote call is fundamentally different from a local call. This is evident by the fact that you have different exceptions to catch (e.g. RemoteException). "
Oh, come on now! You just have to wrap the exceptions with business-sense ones (like CreditConfirmationException), this is what you want to do *even* if you are not using EJBs at all.
About location transparency not being practical: I guess what you mean is that it can make your system really slow. And it can, but this seems to follow the line of argument of Assembler vs. Fortran, C++ vs. Java, etc., namely "I don't have enough control, it is slow"; but in the end it makes you (the programmer) more productive, and as people are more expensive than hardware, location transparency will eventually win the battle because of economic pressure.
One thing the EJB spec does make you do is address the problem of location. I remember talking to a middleware company, and they said that their application had started as a CORBA ORB. It evolved into something that allowed synchronous/asynchronous messaging based on message definitions, plus lots of adapters for various databases and standard applications (SAP, PeopleSoft). The patterns and best practices I've read about EJB lead in the same direction. While value objects are not expressly stated in the spec, they are easy to implement. We wrote our own reflection-based method of copying a value object to the related entity bean. Those things probably should move into the spec, eventually.
I'm glad the article was written. I hope that the gripes listed therein are taken into account as the JCP for the EJB (EIEO!) spec moves forward.
Before this goes too far...:
It's not that I don't believe in OO design and implementation (I don't think I require the Java-vs-assembler analogy ;-).
>> You just have to wrap the exceptions with business-sense ones
Exactly - I agree. But you still had to catch the RemoteException (in order to throw the business-sense one). The exception could have been a timeout, a marshalling error, a socket connection error - none of which are valid for local calls. Moreover, you may want to treat each differently - you can recover from some exceptions. (NB: this is just RMI - nothing to do with EJB.)
Therefore... you can't, and don't, treat a remote call exactly the same as you do a local call.
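The wrapping being discussed can be sketched with nothing but the JDK's `java.rmi.RemoteException`; the service interface and the business exception here are made-up names, not anything from the EJB spec:

```java
import java.rmi.RemoteException;

// Hypothetical business exception: callers see a domain failure,
// not a transport failure.
class CreditConfirmationException extends Exception {
    CreditConfirmationException(String msg, Throwable cause) {
        super(msg, cause);
    }
}

public class CreditDelegate {
    // Stand-in for the remote stub's signature; a real RMI or EJB
    // remote interface method would likewise declare RemoteException.
    interface CreditService {
        boolean confirm(String cardNumber) throws RemoteException;
    }

    private final CreditService remote;
    CreditDelegate(CreditService remote) { this.remote = remote; }

    // The delegate is the one place that must still catch
    // RemoteException -- location transparency ends here.
    boolean confirm(String cardNumber) throws CreditConfirmationException {
        try {
            return remote.confirm(cardNumber);
        } catch (RemoteException e) {
            // Could be a timeout, marshalling error, or socket failure;
            // the caller only learns that confirmation did not happen.
            throw new CreditConfirmationException(
                "Could not confirm credit for card", e);
        }
    }

    public static void main(String[] args) {
        CreditDelegate d = new CreditDelegate(card -> {
            throw new RemoteException("connection refused");
        });
        try {
            d.confirm("4111-0000");
            System.out.println("confirmed");
        } catch (CreditConfirmationException e) {
            System.out.println("business failure: " + e.getMessage());
        }
    }
}
```

Note what the sketch does not solve: if the failure happens after the server committed but before the reply arrived, the wrapped exception still cannot tell the caller whether the work was done - which is the poster's point about non-idempotent calls.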
>> About location transparency being not practical, I guess
>> what you mean is that it can make your system really slow.
>> And it can, but this seems to follow the line of argument ... <snip>
What I am saying is that it is necessary to be a pragmatic OO developer. A *pure* OO design/implementation for a *distributed* application will lead to very chatty interfaces and lots of calls across the network.
You can still do an OO design - I am not suggesting that you must abandon OO design, or that you must abandon distributed design - but you must account for the network interfaces in your design.
If you don't, performance and throughput will suffer (a lot!) - and network latency (from chatty applications) is something that you cannot eliminate (without changing the design). This is not a new phenomenon. Lots of early/naive CORBA projects failed in this way.
Therefore... you can't, and don't, treat a remote interface exactly the same as you do a local interface.
"Its not that I dont believe in OO design and implementation (I dont think I require the Java vs assembler analogy ;-)."
Sorry about that, I didn't mean it that way, please accept my apologies. :-(
Getting back to higher grounds...
"But you still had to catch the RemoteException (in order to throw the business sense one)."
Sure, but I would do it in some sort of façade business object, the way (sorry to raise comparisons) a .Net proxy class does. If the business class's clients consistently use this façade, then they would be perpetually unaware of the real business object's location.
"... performance and throughput will suffer (a lot!) - and network latency (from chatty applications) is something that you cannot eliminate (without changing the design). This is not a new phenomenon. Lots of early/naive CORBA projects failed in this way. "
I understand your pragmatic reasons, but on the flip side, bandwidth has grown a lot (some sort of Moore's law is applying here), so what was unacceptably chatty yesterday will be reasonably wordy tomorrow. For this reason I think that, without being too audacious, we should consistently push our designs toward a location-unaware model.
>> Sure, but I would do it in some sort of façade business
>> object the way (sorry to raise comparisons) a .Net proxy
>> class does. If the business class clients consistenly use
>> this façade, then they would be perpetually unaware of the
>> real business object location.
I agree with you - that is a good approach - the delegate pattern *should* hide all the implementation details from the rest of the code. However, you can't do that in a generic way. When you have non-idempotent calls and there is a remote error (the example of the *remote* server crashing), you can have a situation where there is no way to determine whether the particular call/transaction actually completed. The only way to find out if it did complete/commit is to defer to some business logic - or defer it to a human to check. For synchronous operations in a local process, this never occurs.
>> I understand your pragmatic reasons but on the flip side,
>> bandwidth has grown a lot (some sort of Moore's law is
>> applying here) so what was unacceptable chatty yesterday
>> will be reasonable wordy tomorrow. For this reason I think
>> that, without being to audacious, we should consistently
>> push our designs to a location-unaware model
There is a similar law (maybe it's just a saying) that says: "you can buy bandwidth, but latency is here to stay".
While I agree with your point, it is not necessarily bandwidth that's the problem. A chatty application *accumulates* latency. Increasing bandwidth will help to a point - but even if your remote call is on the same machine (i.e. the network is not even involved), the call is still orders of magnitude slower than a local call.
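The accumulation point is just arithmetic. A back-of-envelope illustration (the round-trip figure is an assumption for the sake of the example, not a measurement):

```java
// Back-of-envelope illustration (numbers are assumptions, not
// measurements): a chatty interface pays one round trip per call,
// so batching 1000 fine-grained calls into one coarse-grained call
// removes 999 round trips -- extra bandwidth changes none of this.
public class LatencyMath {
    public static void main(String[] args) {
        double rttMs = 2.0;          // assumed LAN round-trip time
        int fineGrainedCalls = 1000; // a "pure OO" chatty interface

        double chattyMs = fineGrainedCalls * rttMs; // 2000.0 ms
        double batchedMs = 1 * rttMs;               // 2.0 ms

        System.out.println("chatty:  " + chattyMs + " ms of latency");
        System.out.println("batched: " + batchedMs + " ms of latency");
    }
}
```

This is the quantitative case for coarse-grained remote interfaces and value objects: the design change, not the network upgrade, is what removes the cost.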
Good points. A very good read on this is http://research.sun.com/technical-reports/1994/abstract-29.html
"We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.
We look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability. These failures have been masked in the past by the small size of the distributed systems that have been built. In the enterprise-wide distributed systems foreseen in the near future, however, such a masking will be impossible.
We conclude by discussing what is required of both systems-level and application-level programmers and designers if one is to take distribution seriously. "
Incidentally, one of the authors (Jim Waldo) later engineered RMI...
>> "We argue that objects that interact in a distributed
>> system need to be dealt with in ways that are
>> intrinsically different from objects that interact in a
>> single address space."
This paper was not particularly insightful back in 1994, and I really don't think it's worthy of being brought back from the dead now.
Transparency is always selective - you can rely on the platform to handle things, or you can code it yourself (the ODP spec. says this in its usual obscure way, but I doubt if Sun people read it).
The consequence of Mr Waldo convincing himself that comms errors were to be treated as being at the same semantic level as 'Customer not found' was, of course, the requirement for every method of every RMI interface to declare the (checked) RemoteException. Fortunately, one vendor at least has had the sense to make this optional (BEA).
>> This paper was not particularly insightful back in 1994,
>> and I really don't think it's worthy of being brought back
>> from the dead now.
>> Transparency is always selective - you can rely on the
>> platform to handle things, or you can code it yourself (the
>> ODP spec. says this in its usual obscure way, but I doubt
>> if Sun people read it).
NO platform can make remote calls appear as local calls. It can try to hide some things (as CORBA and DCOM do) - but there are some serious implications that cannot be hidden in a generic way (it eventually requires the developer to account for the remoteness - either in business logic or in design).
There are several powerful arguments in this thread for why local calls and remote calls are fundamentally different - but I haven't seen any convincing counter-arguments that invalidate them.
Again, those arguments are:
1. Remote invocations are more expensive by an order of magnitude, to an extent which is not ignorable, and code must be specifically designed to recognize and handle this.
2. Remote invocation failure modes are inherently different from and more complex than local ones. The invocation itself can be considered opaque in the local case. In the remote case there can be problems which cannot be handled in a generic manner (remote server failure - did the transaction complete?)
3. Argument passing semantics are different to an extent that cannot be hidden by our current technology.
The guys who designed RMI got it right.
The guys who introduced Local Interfaces also got it right.
Not sure why is it necessary to repeat the Waldo paper's position - presumably people can read this for themselves.
Yes, a distributed system is different in character from a local one. I trust that no one is under the misapprehension that this is a novel statement. However, it does not follow that because the overall system will have different characteristics, the business logic needs to be extended to handle additional cases, any more than a program that involves paging needs to be extended to handle disk errors.
Although a comms error is more likely than a disk error, the handling is essentially the same - either wait until the failure goes away (network restored) or abandon the activity. There's no reason to deal with this failure handling explicitly in every call on every business method - the handling doesn't need to be specialized for every kind of object.
In many systems, transactions fail all the time due to locking conflicts, but the right way to handle this isn't to allow every method to return an explicit LockException - the transactional environment handles the control flow.
Abstractions like these are valuable since they make coding more efficient and programs more reliable. There is no virtue in exposing lower-level semantics unless custom logic will add capabilities to the overall system.
In case it wasn't clear the first time, the 'counter-argument' is simply that transparency can be provided
I trust that you are not under the misapprehension that there is anything novel in these statements. Location transparency is a topic that has been investigated for quite a while, starting with early RPC systems like Xerox Courier and Apollo's NCS in the mid-80s, and continuing through distributed database experiments and the 'semantic web'.
Naturally, a variety of programming models have been tried. As you say, fundamentally a distributed system does have different characteristics from a local one. However, there are different, valid choices about how
>> In case it wasn't clear the first time, the 'counter-argument' is simply that transparency can be provided
I was just looking for some details on how this can be achieved..
Well, a few details sprinkled around there, looks like they got nicely chewed up in my browser form. I think it's time I upgraded Mozilla again...
For the keen, there's stuff in a similar vein with other original ODP and related papers in the ANSA archive: http://www.ansa.co.uk
I think that the main point of the paper was that the design needs to be aware of the distribution.
Of course, the solution might then look transparent.
I don't think you can take any design whatsoever and just convert it transparently into a distributed application.
Setting aside points like non-repeatability of actions (the infamous "oops - the antenna now points towards Andromeda, not Earth; what do we do?"), comms failures, lag times, etc., the memory-space difference is a major thing.
If I do something in my method that modifies the state of an object that is accessed by both sides and is not passed as a parameter or returned (for example, a singleton), making the method distributed implies that the object I deal with must become distributed as well.
If the object happens to be a cache that is very frequently accessed, you have just (potentially) introduced a performance problem.
From some point of view, the system will function as normal - i.e. do the things it was intended to do.
From the requirements point of view, the system could be a failure, because it doesn't meet its non functional (i.e. performance) requirement.
This is an answer to Sartoris' post:
Let's define two objects to have the same interface iff instances of each can be used equivalently by the same code. The question is, can local and remote objects really have this quality? RMI's designers said no, mainly for these reasons:
1. Remote invocations are more expensive by an order of magnitude, to an extent which is not ignorable, and code must be specifically designed to recognize and handle this.
2. Remote invocation failure modes are inherently different from and more complex than local ones. The invocation itself can be considered opaque in the local case. In the remote case there can be problems during argument passing, during the request processing, and during the returning of a value. Code must be designed to deal with these failure modes.
3. Argument passing semantics are different to an extent that cannot be hidden by our current technology.
History shows that every single system that tried to ignore the problems above failed to create an effective and scalable model. Anyway, I don't think this controversy is within EJB's scope. RMI and CORBA are the basic distributed models used by EJB, and both define remoteness to be part of the interface.
Gal, what you say is quite true, in part, with respect to the true intent of JavaBeans.
I think, somewhere, the vendor community got carried away in its ambition - driven mostly by worldly guile, of course, and filthy lucre - to embellish what was inherently not the intrinsic nature of JavaBeans. Now, EJBs are conceptually cool, and I might even say they level the playing field, so to speak, as far as one-man companies like mine are concerned. I can truly develop some "cool" beans and market them to the Fortune big guns, with all development done in my garage. Alas, the truth is something else, though.
True EJBs require a lot of factors to consider in a high volume business processing environment. Did anybody consider how many of the Fortune 1000 companies are running their "bread and butter" applications on the so-called "legacy" OS/390 systems of yore? Why? Because...
Think about it... (;-)
Oh, it's easy. If you don't like them, don't use them. Go for JDO, Jini, ....
There is a lot of truth in this article. We are writing a quite big application using EJB 2.0. It is really not perfect. Especially CMP seems to be useless in a complicated application. The EJB QL language is static (it can't be dynamically generated at runtime), so I can't even do an "order by" without creating a new "static" query (each ordered column needs its own query). That was unacceptable in our application, so we had to switch to BMP. BMP must do a select for each loaded bean and has terrible performance.
EJB QL also lacks many required features, like aggregate functions.
Lack of metadata is also a big issue if you want some client-side validation (data type or string length). Again you must get it from a different source (a special configuration bean, which must be maintained, or JDBC metadata, which kills the transparency between db columns and field names and the XML CMP storage). Entity bean inheritance would also be appreciated.
EJB has impressive tool, component, documentation, and design pattern support. This massively reduces the risk of my projects.
I guess those nasty conceptual and performance issues will be worked out eventually, as there are a lot of companies whose profits will depend on making EJB a feasible choice.
I am sure that, given a large amount of time and money, better performing custom solutions can be developed, but right now I can develop software which fulfills the performance and scalability requirements with relative ease. I can train my programmers on it, I can give them ample documentation, and I can give them tools to use it efficiently.
Why don't you use session beans and JDBC for views/read-only operations (Order etc.)? Why don't you use XML schemas for client-side validation? You can easily generate validators for clients and servers from XML schemas.
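For illustration, here is a rough sketch of how such a session bean could build the dynamic "order by" the earlier poster wanted, over plain JDBC. The table and column names are invented for the example; the whitelist keeps the dynamically built ORDER BY clause safe.

```java
import java.util.Set;

public class OrderQueryBuilder {
    // Hypothetical sortable columns of an "orders" table (illustration only).
    private static final Set<String> SORTABLE =
            Set.of("order_date", "customer_name", "total");

    // Builds the SQL a session bean would run over plain JDBC;
    // validating against the whitelist prevents SQL injection.
    public static String findOrdersSql(String orderBy) {
        if (!SORTABLE.contains(orderBy)) {
            throw new IllegalArgumentException("unsortable column: " + orderBy);
        }
        return "SELECT order_id, order_date, customer_name, total"
                + " FROM orders ORDER BY " + orderBy;
    }
}
```

This sidesteps the one-static-query-per-column problem entirely, at the cost of leaving the CMP world for reads.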
I don't think location transparency is a desired feature in mission critical applications. When you make a method call, you would want to know whether the target object is remote or local. Most of the technologies that support distributed architecture talk about quality of service (QoS). If a method invoked on a remote object has latencies associated with network communication, I reckon the client objects should be aware of that.
Yes, I guess it is an interesting article. However, I have read through maybe a third of it and I find myself scratching my head at most of the points. There are some valid points - most of which have been made hundreds of times before. Most, though, seem a little off base or just plain don't make sense:
"EJB represents a radical departure from the Beans model...Enterprise Beans don't have property change events or vetoable events, or indexed properties."
JavaBeans and EJBs serve two entirely different purposes. Extending one to the other doesn't make sense. Binding lightweight JavaBeans to application events makes sense. Binding EJBs to (possibly remote) events does not. Also, EJB does not address indexed properties because EJBs don't HAVE properties. If you want to access an indexed attribute of an EJB (either a POJO or a composite EJB), just add a method to do so: LineItem getLineItem(int i) on Order
"The EJB spec doesn't address access transparency."
"The EJB spec doesn't address failure transparency."
No, that's what the vendors' containers do.
"The EJB spec doesn't address performance transparency."
Performance transparency. That doesn't even make sense. You always have to consider performance when you develop applications - especially distributed applications. A spec isn't going to solve all this for you. An easy rule of thumb: make distributed calls as infrequently as possible.
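A sketch of that rule of thumb: instead of the client making several fine-grained remote getter calls, a facade returns one serializable snapshot in a single round trip. The class and method names here are invented for illustration, and the facade is a plain-Java stand-in for what would be a session bean.

```java
import java.io.Serializable;

// A value object: one coarse-grained remote call returns everything
// the client screen needs, instead of several fine-grained getters.
class CustomerDetails implements Serializable {
    final String name;
    final String email;
    final int openOrders;

    CustomerDetails(String name, String email, int openOrders) {
        this.name = name;
        this.email = email;
        this.openOrders = openOrders;
    }
}

// Stand-in for a session bean facade; in a real deployment this method
// would be the remote call, reading from entity beans or JDBC.
class CustomerFacade {
    CustomerDetails getCustomerDetails(String customerId) {
        return new CustomerDetails("Alice", "alice@example.com", 2);
    }
}
```

The network cost is paid once per screen, not once per attribute.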
"The only justification for EJB is on large projects."
How can you justify this statement? Open-source solutions, the increase in tools and a much improved spec make EJBs viable for almost any size project.
"There is no concept of whether the primary key is generated externally by the application"
Exactly. That is left up to the developer. If you are porting a legacy app to EJB and the database is gen'ing the PKs, you can keep doing it this way.
"No standards for writing Session beans as Beans."
Huh? Not even sure what that means.
I could go on... These are just a few that jumped out at me.
I have to agree with a few of the points, but like someone else said: no technology is perfect, and it's the best out there. So just design and code around its flaws already! But here are a couple I liked:
"Sun engineers say don't use entity beans. Most 'fast' EJB patterns seem to involve using sessions that go straight to the metal. Half the time is spent navigating, so half of the system needs to be re-written. Entity beans useful, in single entity transactions? This means that EJB is only half complete."
I'm not sure I'd say that ejb is 1/2 complete, but the rest of that paragraph I agree with. Sun spent all this effort coming up with all this stuff, then they tell you not to use it.
"Most web apps are not very transactional. EJB comes from a TP/MTS type background. Perhaps it should have been called TJB, Transactional Java Beans, so that people knew where it belonged. As it is, EJB co-opts areas that simply don't belong to it. It's a baroque framework for transactional Java beans, which tries to wear the clothes of the enterprise emperor. Conversely, EJB is very transactional - it sucks when it comes to high performance queries, read-only work."
Absolutely true. I had to deal with that in my apps, but it really wasn't a big deal.
"It's difficult for a bean to tell the server when it is dirty, or read-only. "
Yes, it is; that's why the darned container is constantly calling ejbStore to keep things in sync. Again, I added a dirty flag to my value objects.
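A minimal sketch of that dirty-flag idea on a plain value object (a hypothetical class, not the poster's actual code): mutators set the flag, and the code behind ejbStore can skip the UPDATE when nothing changed.

```java
import java.io.Serializable;

// Value object with a dirty flag, so the store step can skip
// the database UPDATE when nothing has been modified.
class AccountValue implements Serializable {
    private String owner;
    private transient boolean dirty;  // bookkeeping, not persistent state

    String getOwner() { return owner; }

    void setOwner(String owner) {
        this.owner = owner;
        this.dirty = true;  // every mutator marks the object dirty
    }

    boolean isDirty() { return dirty; }

    void markClean() { dirty = false; }  // called after a successful store
}
```

The flag is transient so it never travels with the serialized state.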
BTW, those who said "I didn't read them all, there were 101 after all" are wrong; they only posted the first 34 of these points, the rest are coming later.
"Especially CMP seems to be useless in complicated application. "
Definitely. We don't even use it here.
Some of the problems described here are real. For instance, the thing that really affected me was locking - there's no elegant mechanism to do custom locking in EJB (there are solutions, like storing session handles in a db, or using JMS, but... one shouldn't need to do anything like that for a lousy lock). The reasonable way of making your application safe is to use serializable transactions, but that kills performance...
I don't believe these guys are the J2EE "evangelists" they claim to be, they sound more like RAD types, looking for a drag-and-drop way of developing middleware.
EJBs and JavaBeans are worlds apart in design and target, the name Enterprise JavaBeans is surely misleading.
Oh, this makes me sick, alright. I would call these guys' approach "nostalgia" if I thought they were earnest.
Conform to ISO ODP? They must be kidding...
Remember the OSI 7-layer approach to networking? TCP/IP completely crushed it.
Remember DCE? No? It's alright. How many CORBA implementations can you count from the top of your head? One? Two? Ten?
ODP was meant to "enable the interoperability between CORBA and DCE objects and vice versa". What a crock to base a new technology on!
Where's that quote from? I can't imagine how ODP would position itself as an interoperability solution when it's a conceptual model. Anyway, ODP was around before CORBA - that's why CORBA was able to benefit from some of the theory (I should know because I put some of it in there [for CORBA 2])!
As far as I know, ODP has virtually no relationship to OSI, apart from being an ISO spec.
Ok, that quote is not directly from the ODP specs. It's from Vogel's "bridges" paper. The ODP Trader was surely modelled with middleware interoperability concerns in mind, to bridge the gap as the distributed processing camp moved to CORBA.
That was the exact point I was trying to make with the OSI network model: committee based standards either involve too much overhead for implementation or an early implementor takes the world by the nose and imposes an "industry-standard".
That's what happened to ODP. Oh it still kicks, but kicking is not always a sign of life.
As this is turning into a what's-wrong-with-EJBs thread - my 2c worth.
Entity beans were an attempt to get a transparent persistence implementation in a relational/SQL92 world. Reconciling the barely compatible OO and relational views is one of the major unresolved issues in modern computing. The EJB specification writers were required to deal with the reality of almost all corporate data living in relational databases. Inevitably, compromises were made and entity beans are a bit of a lowest common denominator solution.
If you read the EJB specification, the intention is clearly that EJBs be large grained; unfortunately the fine-grained nature of the relational model means we ended up with fine-grained entity beans. There are ways to solve this granularity mismatch using entity and session beans in combination, but they don't seem to be well understood.
A second point stemming from the need to use relational databases is that we are constrained by their transactional model. Despite what others have said in this thread, SQL92 only supports pessimistic locking. Yet distributed systems require optimistic locking (it scales much better).
IMO the only way out of these problems (and others) is dedicated OO persistence stores that may expose relational interfaces. I am not aware of anyone building one, and OO databases seem to have died. A variant on this, which I think is much more likely to eventuate, is components with dedicated and protected data stored wherever.
Interestingly .NET has by-passed these problems and gone for a Servlet type model. I don't think this is the right answer, but it seems we have a ways to go before we resolve the issues in middle-tier and relational tier computing.
Ah, the 'your beans are too fine grained' mantra. Not very helpful, IMHO, because the original implementor has goals of flexibility and simplicity that these scholars of umpteen 'pattern' sophistries are so happy to consign to oblivion. There is a reason why the developer is using OO, after all, otherwise he might as well put JDBC calls in servlets.
I think the problem would be better stated as being one of an overly tight coupling of access activity between EJB methods in a transaction and the database. The spec. makes it very difficult, but I recommend looking at TOPLink to see how things can be improved, both using their intelligent EJB container and/or by using normal Java classes directly with their persistence mechanism - that really can make life simpler.
I have worked with Tuxedo and C++ before I worked with EJBs, and I can tell you how amazingly simple EJBs are to work with compared with the earlier app server technologies.
IMHO, EJBs are not for everybody's consumption. You should use it only when you have need for its special features like transaction mgmt, automatic persistence and portability.
Most of the points in the article miss the above truth. For example, Beans were designed for an entirely different application area (GUIs) while EJBs are designed for a component based server side application.
However, I do accept that EJBs are not perfect yet. In particular, in my experience, I have found that local interfaces for entity beans are nearly always used along with session beans, and remote interfaces for entity beans are rarely used. The spec should make that the default option.
Likewise, automatic key generation is something nice to have...
But let's not crib about the usefulness of EJBs. As the saying goes, something is better than nothing.
you want to change something?
The next set of "damnations" is on the site now - Development and Deployment issues:
Thanks to everyone for their feedback so far.
There are a lot of valid points in this article. EJB/J2EE is far from perfect. Yet it is the best enterprise development framework out there, considering all the alternatives. And it's a framework that's still evolving and improving.
Ultimately, the usefulness of a tool lies in the hands of its operators. Is it a coincidence that this article originates from "bad-managers.com"? I think not! ;-)
Just one of the gross misunderstandings in this article..
"The logic of basing a system on saving a resource as cheap as memory seems absurd."
If this logic seems absurd to you then you don't really understand why it's being done. The benefit of object pooling is not to reduce the total amount of memory used. Even though memory gets cheaper all the time, the relative cost of *allocating* memory remains constant and high. One of the benefits of object pooling is that it saves you this cost. Another benefit is that the total amount of memory is bounded. No matter how many DIMMs or SIMMs you buy, their size remains finite. Object pooling ensures that the absolute limits of memory will always be respected, no matter how much you have to spare.
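For illustration, here is a minimal plain-Java sketch of both benefits at once - instance reuse and a hard bound - not any real container's pool implementation:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal bounded instance pool: idle instances are reused rather than
// reallocated, and the pool refuses (politely) to exceed its hard limit.
class BoundedPool<T> {
    private final Deque<T> idle = new ArrayDeque<>();
    private final int maxInUse;
    private int inUse = 0;

    BoundedPool(int maxInUse) { this.maxInUse = maxInUse; }

    synchronized T acquire(Supplier<T> factory) {
        if (inUse >= maxInUse) {
            // the absolute limit is respected; caller is turned down, not crashed
            throw new IllegalStateException("pool exhausted");
        }
        inUse++;
        return idle.isEmpty() ? factory.get() : idle.pop();
    }

    synchronized void release(T instance) {
        inUse--;
        idle.push(instance);  // keep the instance for reuse
    }
}
```

A real container also handles timeouts and passivation, but the two properties claimed above are visible even in this toy version.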
Actually, this is true only for session beans and entity beans with a small number of attributes and/or the attributes being primitive types.
If your object contains 10 objects, these take time to allocate as well - about the same as your main object. (yes, you don't allocate them at the start, but you have to eventually).
So if you don't allocate your main object, all you save is about 1/11 of the whole time (actually it's a bit more but still not a significant saving in allocation) - GC also still has to collect all 11 objects (some of them might not be fully aggregated so it has to check them all) etc..
Then, when you "passivate" your object, all of this gets stored somewhere (serialized, mostly), and "activating" it allocates it again. The difference between serializing it all at once or in parts is minimal.
It makes perfect sense for session beans though, as most of them have no attributes at all, or only a very limited set.
Is object pooling at the application server level still relevant? Are modern VMs not already implementing object pooling "natively"? That's just a question, I can be misinformed... but this sounds very natural to me.
The same can be discussed about JDBC connection pooling. Some JDBC vendors implement connection pooling at the JDBC driver level, directly. Again, is it the role of an application server to provide JDBC connection pooling?
We can even go further. Are we sure that an application server implementing pooling will not actually degrade the performance/scalability/memory usage of a deployment that is: a) run using a VM that implements object pooling; b) using a JDBC driver that implements connection pooling?
Some application server features were relevant when they were thought out (around '97 for EJB-based app servers). Are they still relevant, systematically, nowadays? Again, this is a question.
Yes, object pooling is still relevant. Consider the second part of my argument above--any resource pool places a finite limit on the amount of that resource that gets used. The JVM only does this with the heap. Pooling EJBs (or anything else) ensures that you will never exceed the limits of your system. Even if you try, you will be turned down politely and not bring the entire server down. Conservation of resources is one of the basic requirements for scalability.
As for JDBC vendor connection pooling, why would you want to take your chances with whatever pooling scheme the JDBC vendor uses? Wouldn't you rather take advantage of one consistent pooling mechanism that has been beat to death over thousands of deployments and will behave the same regardless of the JDBC drivers you choose?
Instance pooling won't stop you from creating X instances of your own classes - of course, unless you make everything an EJB, which would be quite wasteful on its own.
Scott Shaw said:
"As for JDBC vendor connection pooling, why would you want to take your chances with whatever pooling scheme the JDBC vendor uses? Wouldn't you rather take advantage of one consistent pooling mechanism that has been beat to death over thousands of deployments and will behave the same regardless of the JDBC drivers you choose?"
What makes you think the app server vendor will provide better connection pooling than the JDBC vendor?
Our (bitter) experience was that the connection pooling provided by our (expensive) app server was unworkable crap.
It worked fine with the app server's native type 2 driver for Oracle - but not with anything else.
How the hell do I read this thread? No link.
I think that for EJB, significantly, you need technical project management. Why? I am explaining here:
1. When I began with EJB I loved the spec. But the operative word here is that I came from a C++ background and was also exposed to internet-style development, and if you have been muddling around with semaphores and mutexes while solving a business problem, sure it helps you, and you are impressed, and you are reasonably fast at it.
2. But I find EJB is not easy to grasp for people with, say, only a VB background. I think most app vendors need to do a bit of recruitment consulting. Ultimately a technology is only as good as the projects that ship with it, and projects won't make it if the team is wrong. The bottom line is that to succeed you need a C++/Java combination in a business internet environment, for both .NET and J2EE. If you try too much ASP-to-JSP, VB-to-Java and so on, make sure everyone in the team has had to deal with the same learning curve, or else you will more likely fail because of culture clashes.
3. Moreover, I think to succeed in an EJB project you need a small group of moderately experienced people. You are going to be in for a rough ride otherwise, because very few youngsters nowadays seem to understand the concept of, say, makefiles, and have not worked significantly with databases. Most importantly, inexperienced people become IDE slaves, so if you haven't got the right IDE you are just stumped. If you are experienced you will instinctively take a shortcut like straight JDBC, where a less experienced guy will get caught up in EJB QL.
4. For a small business, vis-a-vis Microsoft this is a very high cost of adoption. First, it's significantly easier to get something up and running with VB, ADO, COM and MTS - and who cares about object orientation. Or you go for customised buy-side/sell-side apps, and those who work on them charge you an arm and a leg.
5. Then there is the fact that you need HTML programmers, JSP guys and EJB guys, and you have to keep them talking and making them understand each other constantly. Java competence does not equal EJB or JSP competence, does it? And EAI guys just hate these nouveau think-smart EJB guys, and with good reason.
6. Finally, the content side is getting complex. But your EJB guy won't move from EJB because it hurts his future marketability, so he starts spending inordinate time on custom tag libraries, which your HTML guy refuses to work with. And anyway, if you are a small business but want to start serving different kinds of content - WAP, voice and so on - well, just go kill yourself, for even the HTML guys don't want to be generalised to WML because they want to start contracting.
7. There was a time, not so far in the past, when the US had export regulations for 128-bit encryption and so on, and a bunch of C++ contractors, 8 in all, wrote a complete bundled solution - single sign-on, JMS, PKI, you name it. Yes, content was not separated - our web server just served up customised pages. Yes, it was not exactly the pinnacle of design patterns. But significantly enough, it worked and was fast enough. And it didn't need half the project management competence a similar project would need now. The client, who at that point in time was pretty moronic, got our code, started a shop, paid his bills, made his profit, and then recently upgraded to a complete bundled solution from a well-known vendor since he was expecting a lot more traffic. Now he had to get his own programmers and so on, and having gone through with it, he is in a pretty threatened, depressive, suicidal, livid state all at the same time.
8. I hope Jini/Brazil/Rio etc. get here fast enough before all clients go the same way.
Just in case anyone thinks it is morbid to talk about VB, COM and the like in the same breath as EJB: I don't work for Bill Gates, but try this experiment with Linux, Solaris, NT and 2000. Use a UDP packet generator of the kind freely available, use a freely available TCP packet generator, then use Ethereal or some other open-source packet sniffer. Now start on a single box and watch when the UDP packets get lost under the barrage, or the kernel doesn't pick them up. Do you see the difference between NT and 2000?
Guys, guys, guys, guys!!!
Please just tell me in plain simple English when to use EJBs (2.0) and when NOT to use them! This thread has gotten so verbose and complex... sheesh!!!
Yes, I am new to EJBs, but I'm also the kind of person that learns from the MISTAKES of others as I don't believe in re-inventing the wheel.
So, are EJBs useful at this particular time? Or is there a better alternative? Quit bitching over other unproductive sidebars and provide SOLUTIONS to the alleged mine traps.
Is it simply approaching one's design differently?
Or are the specs phucked up and still in need of more work, since it seems like PERFORMANCE is a really big issue???
If so, then what are the "work arounds" for Scenario-ABC?
Next, what are the "work arounds" for Scenario-XYZ??
Is this so hard guys??
And thanks for all your previous posts above!!! :-)
But, sometimes, you have to say, "fine, what are the most glaring problems and show me the solutions."
"Please just tell me in plain simple English when to use EJBs (2.0) and when NOT to use them! This thread has gotten so verbose and complex...shessh!!! "
It is interesting that vendors do not provide a definite answer to this critical issue.
Take a look at Sun's blueprints J2EE Application Scenarios. You will find that all models are allowed on J2EE. Even browser > servlet > database, and, of course, Data Access Objects.
In this way, we have a too-open scenario, with no real criterion for choosing an approach in a particular project.
Extracted from sun blueprints:
"There are numerous examples that one could concoct where an EJB server (at least initially) could be deemed to be an overkill given the problem being tackled. This is the sledge-hammer-to-crack-a-nut problem. In essence, the J2EE specification does not mandate a 2, 3, or multitier application model, nor realistically could it do so. The point is that it is important to use appropriate tools for a given problem space."
You can find many opinions, coming from frustration:
* Use EJB in big projects, not in small ones
* Use EJB when you need all the features of the spec (typically, never)
* Use EJB when performance is-not-an-issue ;-)
* Never use EJBs
I think that J2EE in general and EJB in particular provides a solid framework to build applications.
Of course, the spec is not perfect, but at least there is one!
When to use EJBs?
It is possible to build scalable applications with J2EE, with EJBs and without, but always with the right (experienced) team.
IMHO, if you are in doubt, do not use it, at least in a critical project.
Well, I have read through - albeit very quickly - the 101 Damnations. I must admit, they have some good points. I guess for me the HUGE problem with EJB is performance. In particular Damnation 79:
"79. EJB performance is very slow, and uses lots of resources. It's costly to scale. The "fetch by primary key" query model suggests that the designers did not understand how relational databases work:
The EJB queries return a list of primary keys. Each of these primary keys are then used to perform another query to the database. This operates in a way contrary to the way relational databases are optimised, i.e. on sets of rows. The whole philosophy behind EJB object pools is to re-use objects, and not cache them. This relates to Damnation #3 - that EJB is designed to preserve memory, when memory is not a scarce resource - in fact it's a dime-a-bucket at the moment."
This part of the EJB spec completely blows me away. This means that EJB entity beans are pretty much useless for representing data searches, UNLESS I am missing something.
As a Java designer/developer and Oracle DBA, I am at a loss for words about this issue. It severely stunts the usage and relevance of EJB entity beans. Can someone explain to me why this is so? Perhaps even tell me of a workaround?
You can either write your response here or to alexjamesday at hotmail dot com
I would appreciate your opinions and advice,
The important thing people tend to overlook is:
EJBs weren't designed with massive processing in mind - particularly for running finders that return hundreds or even thousands of entries.
As for why primary keys:
- if you have only a few entities, chances are that the entity is already cached in the pool.
- it's easier to do distributed stuff when sending around just the key (remember, the machine that runs your query might not be the machine making the request...).
From a practical point of view, I think it is overengineering, as the vast majority of users won't meet any of these conditions (i.e. only a few entities, or a need for EJB distribution).
As for what to do - if you have to use BMP, try to use your entity beans only for dealing with clients (i.e. no batch jobs). Or use a good CMP engine; they are allowed to do in-container optimizations (such as actually fetching everything, not just the PK, and then instantiating as needed).
Or of course, don't use entity beans at all :) (IMNSHO almost a must in most real enterprise applications if you don't have a good CMP).
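For illustration, the N+1 problem above can be shown in SQL terms (the `item` table and column names are invented). A session bean going straight to JDBC issues the second form once instead of the first form N times:

```java
import java.util.List;
import java.util.stream.Collectors;

// The "fetch by primary key" model issues one SELECT per key; a session
// bean using plain JDBC can batch the keys into a single set-based query.
class BatchLoader {
    // What BMP effectively does: called once per primary key.
    static String singleRowSql(int pk) {
        return "SELECT * FROM item WHERE id = " + pk;
    }

    // The set-based alternative: one round trip for the whole result set.
    static String batchSql(List<Integer> pks) {
        String in = pks.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(", "));
        return "SELECT * FROM item WHERE id IN (" + in + ")";
    }
}
```

The database optimizer works on sets of rows, so the batched query is the shape relational engines are built for.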
How to resolve CORBA Marshalling error? How to find out where it occurs? Any tools to debug/log? I'm using Borland App. Server and JRun.
I get the rough-and-ready idea that you'd use entity beans mainly where you are fetching a row with the intention of updating it, so should have the primary key handy. Anything else would use a session bean.
Or is that too painfully simplistic? (Likely yes, but I'm just getting into this EJB malarkey and any and all advice is greatly appreciated!)
I'm getting a bit of a "Listen to me now. And hear me later." disconnect from this thread.
I am steering a project that appears to be successful enough for scalability to be a concern. It uses a home grown XML/XSLT container slash business rule engine persisted to an Oracle DB. I want to distribute the compute burden and was wondering if we should use a minimal entity bean EJB model for our transactional layer or should we grow our own (probably using RMI, SOAP, Web Services)?
From this thread I get the idea that, yes, we could use EJB for a thin transactional layer (listen to me now). "Thin" being the operative word here, since then we would avoid most of the design/development/deployment issues.
However, what does this buy us, except a tie-in to an expensive server product (hear me later)? The team would love to gain EJB experience, but is the price too high?