Tech Talk with John Crupi on J2EE Patterns and eBay Case Study

  1. In this interview, John talks about the new patterns in his book 'Core J2EE Patterns' (2nd ed.); he discusses the Domain Store and the Business Object, and looks at how metadata will change the way we write patterns. He describes how his team rearchitected eBay 3.0, the performance requirements that drove eBay's design, and why eBay chose J2EE over .NET for its architecture.

    Watch John Crupi's Interview

    Threaded Messages (81)

  2. Scary Movie

    John says that he has spent the last two years working with Ford, Verizon, and eBay. He concludes that 'a lot of developers' don't even use domain models in their applications. Does anyone else find that frightening?

    And what happened to Dion's Exploding-Passenger-Plane 'humor' story?
  3. Scary Movie

    I find it difficult to believe, unless 'a lot of developers' build software without any proper OOAD process, or perhaps they decide it is not worth the effort.
  4. Scary Movie

    In my own travails I've found it quite common. People talk a good talk on forums such as this - but not everyone in these forums describes their own projects, ah, accurately, and then there are all the people who don't post to any forums anyway.

    Most developers I've worked with have had a significant problem dealing with abstraction and separating functionality out into layers. Their primary imperative is "make it work".

        -Mike
  5. Scary Movie

    Most developers I've worked with have had a significant problem dealing with abstraction and separating functionality out into layers. Their primary imperative is "make it work".

    This is my experience as well. There are very few people out there who know how to do things right, and even fewer who do, in fact, do things right. Surprisingly enough, OO is still not widely practiced and the waterfall method still rules. Ugh!

    Bill Willis
    PatternsCentral
  6. Scary Movie

    I have to admit that I have experienced the same thing. I recently moved from Java/J2EE over to C#/.NET. I am building a rich domain model in combination with metadata mapper patterns, so there is a strict separation of tiers. I have spoken to many of my friends, and they all seem to say 'just make it work'. I find that terribly scary. We don't want to get too far ahead of ourselves in our application designs, meaning that we do want a vertical slice of the application up and running as a proof of concept. However, that should in no way sacrifice a solid foundation to work with.
  7. Scary Movie

    My experience from a couple of very large, high-profile consultancies is that a high-level "good" architecture is usually planned at an early stage, but as soon as any actual code starts to be written the project goes into "let's just make it work" mode, and any follow-up on architecture or code quality is completely omitted as long as there are no major showstopper problems.

    The architecture "planning" at the beginning is very much playing to the gallery: it's just to show the customer and superiors that there is a good plan. After that, it very much goes into "let's hack something that works".

    Waterfall method all across the board.

    The projects I have seen that have been successes from an architectural point of view have usually succeeded because there was a competent and demanding customer who did rigorous follow-ups and basically kept the project team on its toes: "I might be held accountable for what I am doing".

    Even if there are one or two people on the project who would like to mandate good architecture and pattern use, it's mostly a lost cause if there are four times as many people who just want to "get it to work".
  8. Scary Movie

    Hi all !

    > My experience from a couple of very large, high-profile consultancies is that a high-level "good" architecture is usually planned at an early stage...

    I have seen many so-called architects just quote industry standards... Choosing J2EE and Struts is not what I call designing an architecture. But the term itself (architecture) is often discussed!
    I tend to think that designing an architecture has one goal: to support the design you intend to put to work. I can't architect a system if I have no clue about the kind of design I want.
    This is my understanding: architecture is the basis for design and implementation.

    > plan, after that it very much goes into "lets hack something that works".

    I believe an architect has to make sure the architecture is respected!
    Architecture serves as the technical basis of the whole project. You have to be very self-confident to be an architect...

    > The projects I have seen that have been successes from an architectural point of view have usually succeeded because there was a competent and demanding customer who did rigorous follow-ups and basically kept the project team on its toes: "I might be held accountable for what I am doing".

    The projects I have seen that have been successes from an architectural point of view have usually succeeded because there was a competent and demanding architect...
    My customers most often just want the system to work, no matter what...
    Dev team members learn that good architecture, good design, and good implementation make our job fun and pleasant, while the just-make-it-work approach most often turns into "debug this shit".

    > Even if there are one or two people on the project who would like to mandate good architecture and pattern use, it's mostly a lost cause if there are four times as many people who just want to "get it to work".

    Change managers! It's a management decision to make shit or to make the effort to do things the right way. If your management can't understand that, it's time to change (management or company :-)

    I love my job
    I hate debugging and 250+ LOC classes...

    Bye
    Chris
  9. Scary Movie

    > I believe an architect has to make sure the architecture is respected! Architecture serves as the technical basis of the whole project. You have to be very self-confident to be an architect...

    The problem with the architect role in many big firms is that they are simply there at the beginning, then they get shipped off to the next project that is in its early phases. PERHAPS coming back at later stages IF things have gone badly overboard.
    The problem with being an architect is that you have absolutely no possibility of being there to make sure the architecture is respected, because you are on a completely different project by the time it is needed.

    It's not down to "bad" architects, it's down to politics: features and deliverables are followed up closely by management, but the architecture of said things is only followed up if things go awry.

    Making management understand that won't be an easy task: try changing the culture in a company with tens of thousands of employees.
    You might have the influence to change 3-4 people at a time, but that's not even close to enough in a culture that is very settled in.

    The problem in itself is not down to bad architects or bad developers; it's a cultural and political issue where management doesn't know and/or care about the implications and hidden costs of bad architecture and code.

    Try changing that in a company with ten levels of hierarchy when you have 5-6 managers above you...
  10. Scary Movie

    I'm the Techie Lead of two projects; one of them is a J2EE project running 24x7 worldwide, and the other is a Windows DNA project (no comments...).

    I agree 100% with the fact that the goal is 'make it work'. But... how? When I joined the project two years ago, I decided to refactor the existing application, apply good practices, and play the role of the 'best practices watchdog'. What I found was that nobody cares about iterative development, very few people know about good OO design, and finally, most projects tend toward 'code and fix'.
    But here is my point: all the good work done at the beginning of the project is making it successful, because the baseline is robust. I think that it is sometimes necessary to relax the quality control of our work in order to get something running on time. But as soon as time and resources are available, it is necessary to review all the changes made and modify what could be harmful.

    Also, culture and politics in companies have a lot of influence on our work. Usually, technical leads are the weakest leaders on a project. I cannot imagine that in other engineering disciplines, but this is the shocking truth of IS.

    My two cents...
  11. Scary Movie

    You know - there are scarier sides to these issues.

    It's amazing how many people want to jump on the XP bandwagon, and adopt "lightweight practices" without truly understanding the holistic and complementary nature of XP tenets and practices.

    Sometimes I feel that people adopt XP because they hate writing documentation (wry expression).

    It's also funny to see how so many developers abuse YAGNI and "test-driven development" ideas. Many projects I've seen recently exhibit exactly the characteristics described in this thread, where the lack of a decent design was almost extolled as a *virtue* at the altar of simplicity.

    I am beginning to wonder if the total misuse of a process is worse than having no process at all.

    Sigh.

    Sandeep

    > I'm the Techie Lead of two projects; one of them is a J2EE project running 24x7 worldwide, and the other is a Windows DNA project (no comments...).
    >
    > I agree 100% with the fact that the goal is 'make it work'. But... how? When I joined the project two years ago, I decided to refactor the existing application, apply good practices, and play the role of the 'best practices watchdog'. What I found was that nobody cares about iterative development, very few people know about good OO design, and finally, most projects tend toward 'code and fix'.
    > But here is my point: all the good work done at the beginning of the project is making it successful, because the baseline is robust. I think that it is sometimes necessary to relax the quality control of our work in order to get something running on time. But as soon as time and resources are available, it is necessary to review all the changes made and modify what could be harmful.
    >
    > Also, culture and politics in companies have a lot of influence on our work. Usually, technical leads are the weakest leaders on a project. I cannot imagine that in other engineering disciplines, but this is the shocking truth of IS.
    >
    > My two cents...
  12. Scary Movie

    I think XP is so attractive to management because they translate 'lightweight' into 'cheaper and faster'. And this is not true at all. I tried to get my development team to start with JUnit and the 'test infected' approach. They said, 'so we need to write twice the code we are writing now?!?!'. Only some of them understood the philosophy.
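
    For what it's worth, the 'twice the code' objection usually evaporates once people see how small a test really is. A minimal sketch of the test-infected idea (the class and method are invented for illustration, and plain assertions stand in for JUnit so the example is self-contained):

```java
// Hypothetical example: a tiny piece of production code and the checks
// that pin down its intended behavior. In JUnit this would be a TestCase
// with test methods; here a plain check() keeps the sketch dependency-free.
public class DiscountTest {

    // the production code under test (invented for this example)
    static int discountedPrice(int price, int discountPercent) {
        return price - (price * discountPercent) / 100;
    }

    static void check(boolean cond) {
        if (!cond) throw new AssertionError("test failed");
    }

    public static void main(String[] args) {
        // each assertion documents intended behavior and fails loudly on regression
        check(discountedPrice(100, 10) == 90);
        check(discountedPrice(100, 0) == 100);
        check(discountedPrice(0, 50) == 0);
        System.out.println("all tests pass");
    }
}
```

    The point is that the tests are a fraction of the production code, not a duplicate of it, and they pay for themselves the first time a 'quick fix' breaks an existing behavior.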

    With my managers we had a lot of fun with XP. My VP said: 'Yes, we are going to start doing XP right now!' OK. So I asked for a rearrangement of the desks, to start with pair programming. Answer: 'No, no... we are going to apply XP, except for pair programming. We are not going to pay twice for what a developer can do alone...'

    So, what is eXtreme Programming without pair programming and testing? Just coding fast and furious in order to get a weekly release...

    I know that we Spaniards are not very good at following processes, protocols, and rules... but my boss is American, boy. So mix 'relaxed processes' with the 'American way' (now, now, now), and you get a worldwide 24x7 application. Please don't ask me how... Sometimes I wonder how such a piece of crap can run so smoothly on WebLogic+Oracle... but it does, and pretty smoothly.
  13. architecture and design approach

    It is certainly a scary world out there!

    As a solution architect, who gets bounced into clients at the start, and sometimes pulled out to start the next, I can understand many people's frustrations. However, at the end of the day, we all work in a commercial world, with our clients paying our wages.

    I think there are two parallel approaches to applying a rigorous approach to architecture and design:
    a) explain the process to the client, and to the people on your project. If you can influence them, and convince them of the benefits of a sound architectural approach (e.g. lower cost of maintenance, reduced timescales to implement additional functionality), you will find many people buy in. This is crucial in starting the vision; then you have to see it through.

    b) aid the developers in their understanding of the architecture you wish to implement. It is so common for someone to just say 'function X, pattern Y', etc., but sitting down to educate produces far better results, and also encourages feedback on the design.

    At the end of the day, we all think we know best, but there is always someone out there that knows more!

    10 cents over.

    David
  14. architecture and design approach

    I think you are spot on: It is about education, educating those around you about the hidden costs (need to change something? Oops!) of not doing things right.

    I think the most important step is to try to influence the developer next to you so he/she understands the benefits. In my opinion, the most important quality when looking for new developers isn't necessarily a long list of skills and experience: it's a burning desire to learn new things and deliver quality results. You can bet your IDE on a person like that picking up the skills needed, and more, in no time.

    There are still two main problems, though, especially in larger organizations. The first is changing the culture: it's doable to influence 4 or 5 people, but 4,000-5,000 people is a different thing altogether. I think we actually might have something to learn here, though: just as David said, appeal to the organization's wallet; try to put some measurable numbers on what bad quality actually costs. I'm sure most people can relate to a project or two that went bad.

    The second is trying to influence "the developer next to you" who only sees his job as a 9-5 get-the-paycheck thing; sadly these people exist in large numbers in our profession, as in any other.

    David: "At the end of the day, we all think we know best, but there is always someone out there that knows more!"
    Amen to that! I'd like to add yet another valuable cliché to that, courtesy of James C. Collins's "Built to Last": good enough never is.
  15. When process is lost..


    > I am beginning to wonder if the total misuse of a process is worse than having no process at all.
    >

    "When the process is lost, there is good practice.
    When good practice is lost, there are rules.
    When rules are lost, there is ritual.
    Ritual is the beginning of chaos."

    The Tao of the Software Architect

    Remember, process is a people thing. That's why "processes" are so often misused - you just take the rules and try to play by them while not understanding the process. And then you're lost.

    //ras
  16. Scary Movie

    The problem is that too many clients and companies can't see the benefit of spending time and money on these kinds of efforts. Yet when projects fail, leaving out these efforts tends to be a large contributor to the project's demise. If you can play the role you describe on a project, consider yourself lucky. Hopefully, in time, the development community can convince the management community that we are suggesting real improvements and not just extending a timeline.
  17. Zealous architects

    I can say that I benefitted greatly from working on one of my first true enterprise application projects because we had an architect who not only was self-confident in his design, but educated the team when necessary so that we all were on the same page. Such an experience helped me gain focus for future projects and I try to provide the same example in all projects that I lead.

    I was lucky. There are times when architects kill a project by being over-zealous. It takes a brilliant yet practical mind to handle high-profile clients and deliver a software system that reflects his (and his team's) efforts.
  18. anti pattern people

    Oh so true.

    But by the same token, you need these blind pragmatists to offset one's blindly utopian view of software development, to come up with a blend of JFDI and theory that is at the right level for the project in hand.

    The bottom line is that it is easier for a developer to hack together a JSP that is bloated with Java and performs the functionality of model, view, and controller than it is to stop, develop a project-specific view of the standard patterns around, and come up with an elegant solution.

    And it's even more difficult to persuade them of that when their line manager is screaming at them to deliver and berating them for daring to stop to think. This is the way it is.

    By the same token, you cannot stop to theorise on the entire project architecture, as you'll never get anything done. It's a bit like waterfall architecture.

    We utopians have made things worse for ourselves by creating a pattern-happy fraternity.

    Think about the well-established patterns that were defined in the GoF book. These are true patterns in every sense. They are proven designs based on fact and usage.

    Then think about all those people out there who come up with an idea and slap a 'pattern' tag on it. This site is also culpable of this, with the patterns link in its head banner. These 'patterns' that people are submitting are in some cases as valid as any GoF pattern, but oh so many are just figments of imagination, or in some cases plain anti-patterns, or sometimes just discussions about ideas.

    Most haven't earned such a prestigious title as 'pattern'. Perhaps we should introduce a pattern language and have terms like 'figment', or, if someone has actually taken time to think about it, 'conjecture', and so on.

    Take Mark Grand's books: the first kindly took GoF to the Java world, but then went pattern-happy mad by adding a whole bunch of obscure and arcane design ideas in it and his second book, and had the audacity to call them patterns.

    Don't get me wrong, I wholeheartedly support that link at the top of the page; I just think calling them all patterns devalues the true patterns that are out there. It leaves the blind pragmatist so blown away by the 16 different ways he could design his JSP architecture that it's now even easier to just hack the model, view, and controller into the JSP he's working on, get it done, make the manager happy, and then leave before someone has to maintain his mess.

    We also have to remember that not all IT people are zealots. Quite a lot of them actually like their lives outside work and really can't be bothered about whether what they are doing is elegant; they just want to finish it and go home. That's the way it is.

    So who decides when a figment becomes a conjecture and then becomes a pattern?

    Don't ask me, I've got a deadline to meet.
  19. anti pattern people

    We all have deadlines. But the ability to provide an elegant solution within the given deadline separates the professionals from the pretenders and wannabes. It's the guys who create lousy designs who keep us IT people from going home early in the first place. Don't make the deadline an excuse for creating lousy, bad designs. Make a difference in the small community of software development. Let's be professionals, guys. We only live once. Make sure you have a good legacy to leave behind.
  20. anti pattern people

    "We all have deadlines. But the ability to provide an elegant solution within the given deadline separates the professionals from the pretenders and wannabes."


    Exactly, "Anti Pattern People" is a perfect title and should have been a chapter in "Peopleware".

    I can tell you that all of my large development projects in the past (and present) were met with "design-up-front" resistance. The only reason I get to succeed with "architecting" applications is my stubborn personality; I never give up, and I hold on for the full life cycle. I've been doing consulting for over 10 years with Fortune 1000 clients and have worked with many managers and developers; so far I've only met one client that supported an object-oriented architecture.
  21. anti pattern people

    Then think about all those people out there who come up with an idea and slap a 'pattern' tag on it. This site is also culpable of this, with the patterns link in its head banner. These 'patterns' that people are submitting are in some cases as valid as any GoF pattern, but oh so many are just figments of imagination, or in some cases plain anti-patterns, or sometimes just discussions about ideas.

    I agree with you here completely. When it comes to getting familiar with patterns and how to properly use them, there is a lot of good and bad information out there. One of the things I (and others) have been calling for is a peer-reviewed (perhaps with help from the PLoP/Hillside people) repository of patterns that can serve as a trusted source of great patterns (and other great information like common refactorings, best practices, etc.). Hopefully we can make this a reality soon. It would certainly be a valuable resource.

    Bill Willis
    PatternsCentral
  22. Making it work pays

    Layering needs to be applied in technology delivery as well. You cannot shoot for the right abstraction from the beginning. The approach should be to throw something at the wall and see if it sticks. The perfect abstraction should be driven by customers, not by architects (good or bad).

    Soumen Sarkar.
  23. The emperor has no clothes

    I don't get why people are surprised that EJB projects are not using domain models. Trying to use OO with EJB is like trying to put a square peg in a round hole. EJB has been a total disaster for OO. So-called EJB "patterns" have often been hacks to work around the weak APIs, like finder methods that can only return one concrete class. EJB set enterprise software back by years, in my opinion. The emperor has no clothes; we all know it. Those of us who bought into it and got burned are pissed! All those books that show using Entity Beans for simple web apps. HA! EJB has a place. A simple web app is not one of them.

    -geoff
  24. clarification

    > John says that he has spent the last two years working with Ford, Verizon, and eBay. He concludes that 'a lot of developers' don't even use domain models in their applications. Does anyone else find that frightening?

    >
    > And what happened to Dion's Exploding-Passenger-Plane 'humor' story?

    Just to clarify: Ford, Verizon, and eBay *do* use domain models. I was referring to the many other developers I come across who don't use domain models.

    jc
  25. microserfs R us

    dot.net == toy

    Even Bill Gates doesn't know what to do with it.
  26. microserfs R us

    Toy! Well, list three points explaining your reasoning. Before you begin listing your points, you need to be qualified to list them; meaning, you need to have an extensive working knowledge of the technology. What a sophomoric statement! Anyone working with .NET will quickly realize, as with Java/J2EE, that there is some serious intelligence behind it. And they are both fantastic!
  27. I see TSS removed the text version of the latest interviews. Why is that? Can you please put them back...

    Regards,
    Horia
  28. Hi Horia -

    The text version is definitely there. We changed the interface to show the ENTIRE text contents, instead of question by question (as users requested).

    If you look at the bottom pane, you can scroll down through the entire interview via text. The questions are links to the video content themselves... so you can "read along" ;)

    Cheers,

    Dion
  29. Right

    My mistake. Should sleep more. :)

    Regards,
    Horia
  30. ETA for full eBay 3.0?

    What is the ETA for eBay 3.0 to be applied to the whole site? In the interview Mr. Crupi talks about some "use cases" implemented in J2EE. What is the weight of those use cases in the total number of eBay use cases? Did eBay totally drop .NET from (near-)future deployments?

    Hmmm ...

    Regards,
    Horia
  31. They claim to have used a distributed architecture for performance reasons, but wouldn't a non-distributed, clustered architecture with web servers serving only static content, be even more performant? Not to mention it would be a lot easier to develop.
  32. They claim to have used a distributed architecture for performance reasons, but wouldn't a non-distributed, clustered architecture with web servers serving only static content, be even more performant? Not to mention it would be a lot easier to develop.

    Uh... I can probably think of a couple of drawbacks to running an application like eBay by "serving only static content."
  33. Uh... I can probably think of a couple of drawbacks of running an application like E-Bay by "serving only static content."


    What are you talking about? I said that the WEB servers (Apache, IIS, etc.) would serve only static content. The non-static content (i.e. generated by servlets and JSPs) would be served by the cluster of J2EE app servers (WebSphere instances in the eBay case).
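
    For illustration only, a hypothetical httpd.conf fragment sketching that split (the paths, port, and host name are invented; the interview doesn't describe eBay's actual front-end configuration): Apache serves static files straight from disk, while anything under /app is proxied to the app-server tier.

```apache
# Static content (images, CSS, HTML) served straight from disk by Apache
DocumentRoot /var/www/static

# Dynamic requests handed off to the servlet/JSP cluster via mod_proxy
# (host name and path are placeholders)
ProxyPass        /app http://appserver-cluster:8080/app
ProxyPassReverse /app http://appserver-cluster:8080/app
```

    The web tier then scales independently of the app-server tier, and the app servers never waste cycles on images and stylesheets.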
  34. Do people on this thread only discuss politics? What about the particular architecture the Sun consultants chose for the eBay app? Does everybody agree that it was the best one?
  35. It didn't use entity beans

    They didn't use Entity Beans. Of course not! Why would you use Entity Beans when you can use JDO? A thousand times easier than EB, plus fully OO. It actually wasn't clear from the interview whether eBay is using JDO or something JDO-like. I have no idea why they would reinvent the wheel to do something JDO-like.
  36. It didn't use entity beans

    Yes, they didn't use Entity Beans. They also didn't use any O/R mapping product (JDO or proprietary). From the interview, for eBay they implemented the "Custom Domain Store" pattern (new in the second edition of the Core J2EE Patterns book).

    They did, however, use stateless Session Beans with remote interfaces. This is what I am trying to start a debate on: was it really needed (for the performance reasons stated in the interview)? Wouldn't it be better to have an architecture without the overhead of serializing and transferring potentially hundreds of objects over the network for many user requests? By better I mean just as (or more) performant, with a smaller cost of development.
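
    To make that overhead concrete, here is a small self-contained sketch (class and field names invented) of what a coarse-grained remote result implies: the whole result graph has to be marshalled into bytes before it can cross JVMs, whereas a local call would just hand over a reference.

```java
import java.io.*;
import java.util.*;

// Hypothetical sketch of the overhead in question: a result of 500 small
// value objects must be serialized to cross a remote interface, while a
// local call would simply pass a reference in-JVM.
public class SerializationCost {

    static class Item implements Serializable {
        String title;
        double price;
        Item(String t, double p) { title = t; price = p; }
    }

    // marshal an object graph the way a remote call would, returning the bytes
    static byte[] marshal(Object o) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(o);
        out.flush();
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        List<Item> results = new ArrayList<Item>();
        for (int i = 0; i < 500; i++)
            results.add(new Item("item-" + i, i * 0.5));
        // the payload every such remote invocation would push over the wire
        System.out.println("marshalled size: " + marshal(results).length + " bytes");
    }
}
```

    Whether those bytes matter depends on result sizes and call frequency, of course; this only illustrates the cost being debated, not eBay's actual numbers.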
  37. It didn't use entity beans

    They did, however, use stateless Session Beans with remote interfaces. This is what I am trying to start a debate on: was it really needed (for the performance reasons stated in the interview)? Wouldn't it be better to have an architecture without the overhead of serializing and transferring potentially hundreds of objects over the network for many user requests? By better I mean just as (or more) performant, with a smaller cost of development.


    Are you advocating not using SSLBs at all?
    The primary advantage (and this really translates into multiple advantages) of using SSLBs over other means of encapsulating business logic is the ability to take advantage of container services in a standard, portable fashion. If your application does not need the benefits offered by these services, you might as well forgo them, or engineer your own infrastructural layers (meaning transactions, security, object distribution, clustering, resource management, etc.). That, however, is not necessarily the best approach for larger applications.

    Or are you merely advocating the use of local interfaces only?
    That approach will only take you so far.

    Sandeep.
  38. It didn't use entity beans

    SSLBs


    I mean SLSBs.

    Sandeep.
  39. It didn't use entity beans

    Are you advocating not using SSLBs at all?


    No, I would just like to know why they chose this architecture, and whether they considered/benchmarked a collocated alternative (i.e., using a single JVM for both dynamic content generation and domain logic execution, in a clustered configuration with many JVMs running the same eBay app).

    > The primary advantage (and this really translates to multiple advantages) of using SSLBs over other means of encapsulating business logic is the ability to take advantage of container services in a standard, portable fashion. If your application does not need the benefits offered by these services, you might as well forgo the resultant benefits, or engineer your own infrastructural layers (meaning transactions, security, object distribution, clustering, resource mgmt etc etc). That, however, is not necessarily the best approach for larger applications.

    AFAIK, they only used SLSBs for remote access (object distribution) and probably also for transaction demarcation. I suspect declarative EJB security was not used, nor "resource management". Clustering would necessarily be used, and a collocated design does not impede that. Declarative transactions are useful, but only slightly more convenient than the alternative. And object distribution, well, this is my main question: what did they gain by using it in this app?

    > Or are you merely advocating the use of local interfaces only?
    > That approach will only take you so far.

    It could be an option. The only thing they would be losing is the supposed benefits of object distribution. But what are those benefits, if any? I can certainly see some drawbacks to using remote interfaces in this case.
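
    The semantic difference being debated can be sketched in plain Java (all names invented; a serialization round-trip stands in for what RMI does on a remote EJB call): a local interface hands back a reference, a remote interface hands back a marshalled copy.

```java
import java.io.*;

// Hypothetical sketch: "local" vs "remote" call semantics.
public class CallSemantics {

    static class Bid implements Serializable {
        double amount;
        Bid(double a) { amount = a; }
    }

    // "local" call: caller and bean share one JVM, so a reference is returned
    static Bid localCall(Bid b) {
        return b;
    }

    // "remote" call: the object is marshalled and unmarshalled,
    // as RMI would do, so the caller receives a copy
    static Bid remoteCall(Bid b) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(buf);
        out.writeObject(b);
        out.flush();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()));
        return (Bid) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Bid bid = new Bid(9.99);
        System.out.println("local returns same object:  " + (localCall(bid) == bid));  // true
        System.out.println("remote returns same object: " + (remoteCall(bid) == bid)); // false
    }
}
```

    The copy is what buys location transparency, and it is also exactly the serialization cost under discussion; with only local interfaces, neither the cost nor the transparency exists.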
  40. It didn't use entity beans

    Are you advocating not using SSLBs at all?

    >
    >> No, I would just like to know why they chose this architecture, and whether they considered/benchmarked a collocated alternative (i.e., using a single JVM for both dynamic content generation and domain logic execution, in a clustered configuration with many JVMs running the same eBay app).

    Currently, the easiest way to create a cluster (of JVMs) that hosts Java business logic is to code the logic as SLSBs. Convenient and ultimately portable. Also, if your business logic is spread across multiple JVMs, then how do you go about avoiding remote invocations anyway? Or are you specifically talking about reducing network I/O?
     
    >>> AFAIK, they only used SLSBs for the remote access (object distribution) and probably also for the transaction demarcation. I suspect declarative EJB security was not used, nor "resource management". Clustering would necessarily be used, and a collocated design does not impede that. Declarative transactions are useful, but only slightly more convenient to use than the alternative. And object distribution, well this is my main question: what do they gained by doing it in this app?

    It's cheaper, more maintainable, and more scalable in the long run to cluster JVMs across relatively small machines (2-4 processor Xeons, say) than to buy a behemoth multi-million-dollar high-end machine so you can co-locate all your code. Besides, I don't think remote invocations are all that much of an overhead if your application is well designed. Passing serialized business data back and forth between tiers across JVMs is more problematic. I don't recall John mentioning that they did not co-locate code when it made sense to do so.

    >>> It could be an option. The only thing they would be losing are the supposed benefits of object distribution. But what are those benefits, if any? I can certainly see some drawbacks to using remote interfaces in this case.

    I personally think that the best way to scale an application is to create clusters of relatively inexpensive machines, and add machines as you go along. Most J2EE servers can detect changes in cluster topology, so you can add or remove machines as you like based on load.

    Co-location is only so good. Besides, what about failover?

    Sandeep

    PS: BTW, I agree with the part about using different mechanisms for serving static and dynamic content. I am sure they do this now. Remember, the J2EE portion is probably a relatively small part of the overall app.
  41. It's cheaper, more maintainable, and more scalable in the long run to cluster JVMs across relatively small machines (2-4 processor Xeons, say) than to buy a behemoth multi-million dollar high-end machine so you can co-locate all your code.


    There is a big misunderstanding here: co-locating the code DOES NOT mean running it in a single server machine.
    It means having only local calls between objects in a running JVM; the remote calls needed to enable the app to run in a cluster of several server machines (with any number of CPUs in each) are made exclusively by the app server itself, not by the application code.
    Specifically, in a co-located deployment there would be only two kinds of remote communication: the first to perform Entity Bean cache invalidation, the second to perform HttpSession/SFSB replication. And the app programmer would not have to directly deal with any of this remote communication. By using EJBs with remote interfaces, however, the programmer directly makes the remote calls.
    So, a co-located clustered architecture could be just as scalable as one which is not strictly co-located. Rod Johnson's book on J2EE and Martin Fowler's book on enterprise application patterns both talk about this. It may seem unintuitive, but a J2EE application which only uses local interfaces can still run in a load-balanced cluster of several servers, with fail-over and all.
  42. I do not differ in opinion...[ Go to top ]

    OK. I think, initially, you and I were talking about two different things. I didn't quite get your drift the last time around. I thought you were talking about "remote invocations" in general.

    However, if you are referring to the designer's "choice of using remote interfaces" vs "remote invocations that may be performed by a server", I don't recall Crupi saying anything of that sort in the interview. What question was he responding to when he said that?

    Anyway, as you pointed out, those ARE two *separate* decisions that can be made in the context of co-location:
    1) Whether to locate the code in the same server instance (the actual co-location).
    2) Having chosen 1), whether to distribute code across multiple servers in the cluster.

    That said, I do not think that the position of "remote interfaces are always evil" is a valid one.

    From practical experience, I've seen cases when object distribution comes in handy, especially in distributed setups where security restrictions prevent co-location of code.

    My personal rule of thumb is to co-locate code that is (relatively) tightly coupled, and use the option of remote interfaces for code that isn't, especially if I think that the code will end up running on two separate app servers. Sometimes it isn't all about design and architecture. It's also about politics, and issues like "your turf, my turf", as well as those related to security.

    Sandeep.

    > There is a big misunderstanding here: co-locating the code DOES NOT mean running it in a single server machine.

    > It means having only local calls between objects in a running JVM; the remote calls needed to enable the app to run in a cluster of several server machines (with any number of CPUs in each) are made exclusively by the app server itself, not by the application code.

    > Specifically, in a co-located deployment there would be only two kinds of remote communication: the first to perform Entity Bean cache invalidation, the second to perform HttpSession/SFSB replication. And the app programmer would not have to directly deal with any of this remote communication. By using EJBs with remote interfaces, however, the programmer directly makes the remote calls.
    > So, a co-located clustered architecture could be just as scalable as one which is not strictly co-located. Rod Johnson's book on J2EE and Martin Fowler's book on enterprise application patterns both talk about this. It may seem unintuitive, but a J2EE application which only uses local interfaces can still run in a load-balanced cluster of several servers, with fail-over and all.
  43. I do not differ in opinion...[ Go to top ]

    From practical experience, I've seen cases when object distribution comes in handy, especially in distributed setups where security restrictions prevent co-location of code.


    That said, I would say that if you were architecting a standalone solution, where you knew you could control all distribution aspects, perhaps a strict adherence to local interfaces is best.

    Sandeep.
  44. It would really be insightful to hear other opinions on this matter, i.e. what differences there are between these 2 architectural choices for a web-app:
      A: Servlets/JSPs calling to (remote) session-facade layer (using SLSBs)
      B: Servlets/JSPs calling co-localized facade ("service layer" implemented as POJOs).

    I can see the following commonalities/differences:
      o A/B: The Servlets/JSPs (i.e. the web-container) could be clustered over an arbitrary number of JVMs, probably needing HTTP-session-replication.
      o A: the SLSBs (i.e. the EJB-container) could be clustered over an arbitrary number of JVMs, no replication needed if session beans are really stateless
      o A adds overhead of remote calls to session-facade
      o with A you need to implement a layer of Data Transfer Objects (Value Objects) to be passed between presentation layer and session-facade
      o A can use *declarative* transactions and security
      o both can use *programmatic* transactions and security
      o for enhanced security, you could use an additional firewall in A between web-container and EJB-container
      o there is an additional load-balancing step in A when calling out to the session-facade: this could improve load-balancing if load between sessions is very unevenly distributed.
      o with A you can re-deploy business logic without touching presentation logic (is that worth anything?)

      anything else??,
      gerald
  45. Nice summary.

    This is becoming an increasingly important topic. There seems to be some amount of backlash against EJB (including SLSBs, MDBs are probably the exception) right now.

    On a product that I am working on right now, after having agreed to use Hibernate over CMP for persistence, the two choices you mentioned were the main, residual points of contention. We dismissed J2EE security as a talking point since the product had very complex authentication/authorization needs that would be difficult to satisfy using the default infrastructure.

    The arguments for SLSBs were primarily centered on a) declarative transaction management, and b) effective load balancing. The arguments for a pure web container-based architecture centered on the radical simplification of the design, since a) there were far fewer classes overall, and b) there was no client-managed remoting.

    Since one of our main concerns was time-to-market, we eventually went with the second option, at the cost of explicitly writing JTA-based TxM code. In retrospect, the decision is one that will constrain us as far as deployment options go, and possibly in scalability, but it gained us several months of development time.

    I guess it is all about trade-offs. On another project that I am wrapping up, we used EJB (no EBs), as our scalability needs were higher. Our initial load tests with clustered web containers and session replication did not meet expectations at all.

    Sandeep

    > It would really be insightful to hear other opinions on this matter, i.e. what differences there are between these 2 architectural choices for a web-app:
    > A: Servlets/JSPs calling to (remote) session-facade layer (using SLSBs)
    > B: Servlets/JSPs calling co-localized facade ("service layer" implemented as POJOs).
    >


    > I can see the following commonalities/differences:
    >   o A/B: The Servlets/JSPs (i.e. the web-container) could be clustered over an arbitrary number of JVMs, probably needing HTTP-session-replication.
    >   o A: the SLSBs (i.e. the EJB-container) could be clustered over an arbitrary number of JVMs, no replication needed if session beans are really stateless
    >   o A adds overhead of remote calls to session-facade
    >   o with A you need to implement a layer of Data Transfer Objects (Value Objects) to be passed between presentation layer and session-facade
    >   o A can use *declarative* transactions and security
    >   o both can use *programmatic* transactions and security
    >   o for enhanced security, you could use an additional firewall in A between web-container and EJB-container
    >   o there is an additional load-balancing step in A when calling out to the session-facade: this could improve load-balancing if load between sessions is very unevenly distributed.
    >   o with A you can re-deploy business logic without touching presentation logic (is that worth anything?)
    >
    >   anything else??,
    >   gerald
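    The "explicitly writing JTA-based TxM code" trade-off mentioned above can be sketched in runnable form. Here a hypothetical Tx interface stands in for javax.transaction.UserTransaction (begin/commit/rollback) so the example runs without a container; the point is the boilerplate that declarative transactions would remove:

    ```java
    public class ManualTxDemo {
        // Stand-in for javax.transaction.UserTransaction (hypothetical).
        interface Tx {
            void begin();
            void commit();
            void rollback();
        }

        // Records the demarcation calls so the pattern is visible.
        static class RecordingTx implements Tx {
            final StringBuilder log = new StringBuilder();
            public void begin()    { log.append("begin;"); }
            public void commit()   { log.append("commit;"); }
            public void rollback() { log.append("rollback;"); }
        }

        // Every service method must repeat this try/commit/rollback
        // pattern -- the boilerplate that CMT handles declaratively.
        static String placeBid(Tx tx, boolean fail) {
            tx.begin();
            try {
                if (fail) throw new RuntimeException("bid rejected");
                // ... business logic and JDBC work would go here ...
                tx.commit();
                return "ok";
            } catch (RuntimeException e) {
                tx.rollback();
                return "rolled back";
            }
        }

        public static void main(String[] args) {
            RecordingTx tx = new RecordingTx();
            // prints: ok / rolled back
            System.out.println(placeBid(tx, false) + " / " + placeBid(tx, true));
            // prints: begin;commit;begin;rollback;
            System.out.println(tx.log);
        }
    }
    ```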
  46. I guess it is all about trade-offs. On another project that I am wrapping up, we used EJB (no EBs), as our scalability needs were higher. Our initial load tests with clustered web containers and session replication did not meet expectations at all.


    Interesting. Do you think this result is typical in co-located solutions? I, unfortunately, have not had the opportunity yet to perform such load tests.

    Intuitively, however, it seems to me that the scalability of both architectures should be about the same, or better in the co-located case, in most real-world apps. The reasoning is that, for most request/response cycles in a J2EE web app, the CPU time spent in the presentation layer is larger than the CPU time spent in the EJB (business) layer (not counting the CPU time spent in the DB, of course); in a co-located architecture, there would be more CPUs available for both the presentation and business layers, with no need to have a thread block on a business method call, or to spend CPU time doing object serialization (not to mention the delay incurred by network transfer).

    Rogerio
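    The serialization cost mentioned above is easy to observe with plain JDK serialization. The DTO here is hypothetical and the numbers are machine-dependent; a real remote call would add network latency on top of this CPU cost, while a co-located call pays neither:

    ```java
    import java.io.*;

    public class SerializationCostDemo {
        static class ItemDTO implements Serializable {
            private static final long serialVersionUID = 1L;
            String title;
            double price;
            ItemDTO(String title, double price) {
                this.title = title;
                this.price = price;
            }
        }

        static byte[] serialize(Object o) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(bos);
            oos.writeObject(o);
            oos.close();
            return bos.toByteArray();
        }

        static Object deserialize(byte[] bytes)
                throws IOException, ClassNotFoundException {
            ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bytes));
            return ois.readObject();
        }

        public static void main(String[] args) throws Exception {
            ItemDTO dto = new ItemDTO("rare stamp", 42.50);
            int iterations = 10000;
            byte[] bytes = null;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                bytes = serialize(dto);
                dto = (ItemDTO) deserialize(bytes);
            }
            long elapsedMs = (System.nanoTime() - start) / 1000000L;
            System.out.println(bytes.length + " bytes per copy, " + elapsedMs
                    + " ms for " + iterations + " serialize/deserialize round trips");
        }
    }
    ```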

    > Interesting. Do you think this result is typical in co-located solutions? I, unfortunately, have not had the opportunity yet to perform such load tests.

    well, this could be over-simplifying things, but:

    I tend to think of a clustered co-located system as a
            (single co-located system * the effective number of nodes),
    since, at least from the point of view of managing complexity, things sort of become that simple. That said, I am only an applications architect. :-|

    My default strategy (in a co-locatable scenario) involves co-locating all the business logic on a machine (N1) and using a combo web-server (N2/N3) (usually just one node) for serving static and dynamic content respectively. This does involve remote calls to the service layer. However, since these interfaces tend to be coarse-grained, there are very few invocations per request.

    Independent clustering strategies can be adopted for both N1 and N2/N3, with separate "add-on-demand" or "meet-estimated-peak-load" models. Those are my terms of course - there are a number of formal models that you could adopt. Of course the DB is usually on another box. I haven't spent much time understanding database server clustering.

    >> Intuitively, however, it seems to me that the scalability of both architectures should be about the same, or better in the co-located case, in most real-world apps. The reasoning is that, for most request/response cycles in a J2EE web app, the CPU time spent in the presentation layer is larger than the CPU time spent in the EJB (business) layer (not counting the CPU time spent in the DB, of course); in a co-located architecture, there would be more CPUs available for both the presentation and business layers, with no need to have a thread block on a business method call, or to spend CPU time doing object serialization (not to mention the delay incurred by network transfer).

    Well, this depends upon the nature of the application. I've never really measured this, but I'd have to say that my business + service layer takes up more time than my presentation layer, which just marshalls requests, and co-ordinates logic is actually executed in the other layers.

    Either way, I agree with you that remoting just for the sake of achieving a distributed architecture makes no sense. But remoting between tiers need not be all that expensive if you use coarse-grained interfaces.

    Sandeep.
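    The "very few invocations per request" point can be made concrete with a counting sketch. The names (ChattyAccount, AccountFacade, AccountSnapshot) are hypothetical; in a remote deployment, every invocation crossing the tier boundary would be a network round trip:

    ```java
    public class GranularityDemo {
        // Fine-grained: three round trips if this were a remote interface.
        interface ChattyAccount {
            String getOwner();
            double getBalance();
            String getCurrency();
        }

        // Coarse-grained: one round trip returning a value object.
        static class AccountSnapshot {
            final String owner; final double balance; final String currency;
            AccountSnapshot(String o, double b, String c) {
                owner = o; balance = b; currency = c;
            }
        }
        interface AccountFacade {
            AccountSnapshot getSnapshot();
        }

        static int calls = 0;  // invocations crossing the "tier boundary"

        public static void main(String[] args) {
            ChattyAccount chatty = new ChattyAccount() {
                public String getOwner()    { calls++; return "sandeep"; }
                public double getBalance()  { calls++; return 100.0; }
                public String getCurrency() { calls++; return "USD"; }
            };
            chatty.getOwner(); chatty.getBalance(); chatty.getCurrency();
            int chattyCalls = calls;

            calls = 0;
            AccountFacade facade = new AccountFacade() {
                public AccountSnapshot getSnapshot() {
                    calls++;
                    return new AccountSnapshot("sandeep", 100.0, "USD");
                }
            };
            facade.getSnapshot();
            // prints: fine-grained: 3 calls, coarse-grained: 1 call
            System.out.println("fine-grained: " + chattyCalls
                    + " calls, coarse-grained: " + calls + " call");
        }
    }
    ```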
  48. Slight correction back there[ Go to top ]

    and co-ordinates logic is actually executed in the other layers.


    and co-ordinates logic that is actually executed in the other layers.

    Sandeep.
  49. Gerald,

    IMHO, what you are talking about is probably the top (along with choice of the persistence mechanism) design issue in J2EE.

    Sandeep.
  50. Gerald,

    > It would really be insightful to hear other opinions on this matter, i.e. what differences there are between these 2 architectural choices for a web-app:
    > A: Servlets/JSPs calling to (remote) session-facade layer (using SLSBs)
    > B: Servlets/JSPs calling co-localized facade ("service layer" implemented as POJOs).
    > (...)

    I second your analysis. Load balancing at the remote SLSB level only adds value if there's a very uneven distribution of workload; otherwise, clustering web servers and/or databases is good enough. I prefer to stick to Fowler's First Law of Distributed Object Design: "Don't distribute your objects". This is why many people choose A with local EJBs; the only real benefit that they want from SLSBs is the declarative transaction management.

    Note that B can have declarative transactions too, just not via EJB: The Spring Framework supports declarative transactions via its AOP framework, both on JTA and other transaction strategies. This is extremely lightweight; it even works in plain Tomcat with a locally managed DataSource. I strongly believe that this is a viable and convenient alternative to local SLSBs.

    What I like most about Spring's resource and transaction management is that it seamlessly adapts to any level of J2EE environment: Switching from a local DataSource and DataSourceTransactionManager (e.g. in Tomcat) to a JNDI DataSource and JtaTransactionManager (e.g. in WebLogic) is just a matter of configuration -- application code does not have to change.

    Generally, people tend to confuse *logical* multi-tier architectures with *physical* ones. In many cases, a local service layer is perfectly fine: The abstraction that a business layer facade delivers can be as good with POJOs as with local EJBs, especially with the means for wiring up loosely-coupled application objects that Spring offers, and with the option of transaction-enabling POJOs in a declarative manner.

    Juergen
    Spring Framework developer
    http://www.springframework.org
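    The configuration-only switch described above can be sketched in Spring 1.x-style bean definition XML. This is a hedged illustration, not from any real deployment: the bean names, HSQLDB driver, and JNDI name are all made up for the example.

    ```xml
    <!-- Tomcat / standalone: locally managed DataSource, JDBC transactions -->
    <bean id="dataSource"
          class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName"><value>org.hsqldb.jdbcDriver</value></property>
        <property name="url"><value>jdbc:hsqldb:mem:appdb</value></property>
    </bean>
    <bean id="transactionManager"
          class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource"><ref bean="dataSource"/></property>
    </bean>

    <!-- WebLogic / full J2EE server: the same two bean names, redefined.
         Application code depending on "dataSource" and "transactionManager"
         does not change. -->
    <!--
    <bean id="dataSource"
          class="org.springframework.jndi.JndiObjectFactoryBean">
        <property name="jndiName"><value>java:comp/env/jdbc/appdb</value></property>
    </bean>
    <bean id="transactionManager"
          class="org.springframework.transaction.jta.JtaTransactionManager"/>
    -->
    ```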
    For a situation like Ebay's, I would imagine that this is a problem set where loads are quite distinct depending upon the type of request being processed, and you'd want the flexibility of dispatching transparently to alternate JVMs with a mechanism like stateless session beans. Think of what EBay does, and the relative costs of various transactions - you'd want to load balance beyond just "least loaded server" or round robin or the like. You want to balance based on known-resource-hogs which are infrequently run vs. requests that are more common that need to be fast vs. [insert other criteria here].

    The point being - your post is knee-jerking towards "SLSBs are bad", with a weak assertion in the front "only adds value if there's very uneven distribution in terms of workload". You should take into account that if that criteria happens to be met (and I'd think it would be on a site like EBay), then the remainder of your argument falls apart.

    In real life, not all the world is even-workloads. Every project isn't read-mostly. In some domains developer time is significantly cheaper than downtime/slow response time.

    The point is - common practices often do not apply to many projects, because their characteristics do not match the generally accepted definitions of "common".

    I'll also point out (which I believe you would do if you want to keep expectations properly set) that Spring, while it may be the cat's ass, is so new that it still has that new car smell (< 1 year old), and as such it's not yet considered proven technology. And it suffers the old technology lock-in problem as well. If you don't practice a bit of truth-in-advertising you're unreasonably raising people's expectations, and are creating a backlash situation when people realize the full situation and history of the product.

    And to forestall inversion-of-control-means-no-lockin-arguments: if a site like EBay were to go the Spring route, and decided a year later they wanted to switch to a different approach, how much work would it involve? Compare and contrast to using a standard mechanism such as Stateless Session Beans.

        -Mike
  52. Hi Mike,

    I wasn't specifically talking about Ebay's scenario, admitted. I didn't want to imply that there is no value in remote SLSBs. It's just that *many* (and I mean it) web applications benefit much more from clustering at the web server level than from distribution at the remote component level, and thus do not need distributed objects. I guess I'm not alone in believing that.

    > The point being - your post is knee-jerking towards "SLSBs are bad", with a weak assertion in the front "only adds value if there's very uneven distribution in terms of workload". You should take into account that if that criteria happens to be met (and I'd think it would be on a site like EBay), then the remainder of your argument falls apart.

    Well, if the criteria happens to be met, then remote SLSBs are a good technology choice. There would at least be *some* amount of EJB in the middle tier then, at a coarse-grained facade level. This obviously involves choosing JTA as transaction strategy and is an alternative to transaction management via Spring, agreed. Note that Spring doesn't intend to replace *remote* SLSBs at all; it just offers an alternative for *local* SLSBs if just adopted for declarative transactions.

    All of this is orthogonal to using Spring in general, anyway: Within the implementation of the EJBs, you could use a Spring bean factory and for example the JDBC or Hibernate support. Within the web layer, you could use Spring's web MVC, and an application context for wiring up your remote proxies. We even have dedicated EJB support, both for implementation and access. Both in the local and in the remote case, you can wire EJB proxies to your application objects without any custom lookups.

    > I'll also point out (which I believe you would do if you want to keep expectations properly set) that Spring, while it may be the cat's ass, is so new that it still has that new car smell (< 1 year old), and as such it's not yet considered proven technology. And it suffers the old technology lock-in problem as well.

    > And to forestall inversion-of-control-means-no-lockin-arguments: if a site like EBay were to go the Spring route, and decided a year later they wanted to switch to a different approach, how much work would it involve? Compare and contrast to using a standard mechanism such as Stateless Session Beans.

    True, the codebase is pretty new. It's based on Rod's framework from "Expert One-on-One J2EE Design and Development" though, so it's already more than 1 year old in total. But agreed, it's not "proven" technology yet. Regarding lock-in: Well, there will be some lock-in with any technology. But by applying IoC principles very consistently, Spring does a good job of keeping framework dependencies to a minimum (not none at all, of course).
     
    Regarding the EBay example: I expect the effort of migrating a Spring-based solution to something else to be significantly less than that of migrating an EJB-based solution to a non-EJB one. Of course, a standard mechanism like EJB allows you to migrate from one EJB container to another; but migrating the architecture is a completely different thing.

    There is no golden hammer; there is probably some place for any technology. I just believe that a large percentage of J2EE applications is better off with lighter-weight solutions than a local or remote EJB middle tier. So many applications are still about one single server and one single database, after all.

    Juergen
One more note: I like to consider the individual problems that a technology addresses, the individual benefits that it gives. Transactional execution, persistence, and remoting are different aspects (no AOP pun intended) of an enterprise system. EJB tries to cover them all in one component model, while frameworks like Spring allow you to choose them a la carte.

    There are good individual solutions for any of these aspects. Most people would probably agree that there are better persistence solutions than Entity Beans. I consider declarative transactions as another one of those: You can have them individually too, with hardly any effort. Such selective choice allows for combining best-of-breed solutions.

    So this leaves reliable remoting with load balancing, transaction propagation, etc: This could also be solved individually, for example via a good Web Services tool (not yet I guess); but it's one thing that remote EJBs are good at, and they deserve to be used for it. If you don't need distributed components though, then there are viable and much more lightweight alternatives for transaction demarcation and persistence.

    Juergen
  54. \Juergen Hoeller\
    One more note: I like to consider individual problems that a technology addresses, individual benefits that it gives. Transactional execution, persistence, remoting are different aspects (no AOP pun intended) of an enterprise system. EJB tries to cover them all in one component model, while frameworks like Spring allow to choose them a la carte.
    \Juergen Hoeller\

    There's a bit o' FUD in this one, Juergen. Using an SLSB says nothing about my persistence model. Indeed, an SLSB may not be talking to a persistent store at all.

    Using an SLSB also does not mean that you are actually remoting. It means you _can_ remote. The developer pays some development cost for this, of course, but otherwise you don't pay a runtime remoting cost if you're not actually remoting.

    And, of course, using SLSBs does not mandate using Entity EJBs.

    As for the properties of SLSBs, you have many containers to choose from.

    In short, EJB looks a lot more a la carte than you're indicating. I understand you like Spring and want to promote its use, but you're making way too many blanket statements at the expense of EJBs while trying to make Spring look more attractive.

        -Mike
  55. There's a bit o' FUD in this one, Juergen. Using an SLSB says nothing about my persistence model. Indeed, an SLSB may not be talking to a persistent store at all.


    FUD? No, an integrated component model isn't *that* bad a thing ;-) Of course, SLSBs have no direct connection to persistence, but EJB as a component model has. I still say the dirty word: After all, there's just one 600-plus-page EJB spec, not separate SLSB, SFSB, MDB, and EB specs.

    > Using an SLSB also does not mean that you are actually remoting. It means you _can_ remote. The developer pays some development cost for this, of course, but otherwise you don't pay a runtime remoting cost if you're not actually remoting.

    It's still integrated though: The same underlying container that implements the transaction management also takes care of remoting (and EB persistence). The individual aspects are not pluggable in a best-of-breed style. It's often even worse: The same container also provides servlet/JSP and XA services. So if you're not satisfied with one single part (like the XA impl), you have to switch the entire container or live with the deficiencies of your current one.

    That's not my idea of choice: If I don't like the remoting implementation, I'd like to be able to switch to a different one on the same basic server, i.e. the same web server and servlet/JSP container. The web container might be the best of its class; why should I have to throw it out just because some other part that happens to be integrated into the same server is buggy?

    > In short, EJB looks alot more a la carte than you're indicating.

    We're talking about a different kind of "a la carte". Anyway, of course EJB can provide good enough middle tier services for many scenarios -- I advise everybody to go and use it if you're happy with its development style. But as you've already stated yourself, there might be simpler and more flexible solutions coming up, even if they are still rather young at the moment.

    > Forgive me, but this is a rather Spring-centric viewpoint. Please re-read the above from the standpoint of an objective viewer, someone not invested in the outcome, and it seems a bit silly.

    Please don't misunderstand the sentence "there can hardly be an issue that would make you want to switch out Spring completely": The emphasis is on *completely*. Of course there can and will be bugs and annoyances, even severe ones. But the framework consists of multiple, individually usable parts that are not strongly tied to the other parts. Due to the nature of the framework, it is not one big blob that needs to be thrown out completely or not at all.

    > So if I'm comparing the two it's only because you started it :-)

    A fair point :-)

    Finally, regarding remoting: I consider remoting at the *boundary* of a server system, like for rich clients, far more interesting than remoting *within* a server system, like at the component level. I tend to design J2EE apps not for being distributed internally but rather for making it easy to export services to other kinds of clients than web browsers. Of course, I guess I'm typically involved in different kinds of projects than you are.

    Juergen
  56. \Juergen Hoeller\
    FUD? No, an integrated component model isn't a *that* bad thing ;-) Of course, SLSBs have no direct connection to persistence, but EJB as a component model has. I still say the dirty word: After all, there's just a more than 600 page EJB spec, not separate SLSB, SFSB, MDB, and EB specs.
    \Juergen Hoeller\

    I don't care what the J2EE-approved component model is, or how big the overall spec is (that's a problem for AppServer vendors). From a user perspective, a Stateless Session Bean is a pretty simple thing to understand, use, and code. Pulling in the XXX hundred page spec (where 90% of it has to do with crazy entity bean stuff) is, IMHO, a bit of fudstering.

    \Juergen Hoeller\
    It's still integrated though: The same underlying container that implements the transaction management also cares for remoting (and EB persistence). The individual aspects are not pluggable in a best-of-breed style. It's normally even worse: The same container also provides servlet/JSP and XA services. So if you're not satisfied with one single part (like the XA impl), you have to switch the entire container or live with the deficiencies of your current one.
    \Juergen Hoeller\

    This does not have to be the case at all. I agree that the current commercial AppServers do it this way, but it doesn't have to be that way. See the work on the new Apache J2EE stuff, which is targeted to explicitly allow you to do exactly what you're describing. See also Jonas and JBoss and OpenEJB and Tyrex. And various JMS implementations, various JNDI implementations, etc etc.

    Please note also that servlet/JSP engines have never been forcibly tied to the rest of app server containers. You can use a light-weight JSP engine calling into Weblogic or Websphere if that floats your boat.

         -Mike
  57. I don't care what the J2EE-approved component model is, or how big the overall spec is (that's a problem for AppServer vendors). From a user perspective, a Stateless Session Bean is a pretty simple thing to understand, use, and code. Pulling in the XXX hundred page spec (where 90% of it has to do with crazy entity bean stuff) is, IMHO, a bit of fudstering.


    Calling Entity Beans "crazy" could be considered FUD too ;-) Seriously, I'd love the EJB 3.0 expert group to get rid of Entity Beans completely, making the spec so much slimmer. What's a problem for app server vendors is ultimately also negative for users: Vendors have to provide so much, ehm, not so useful stuff to get certified, when they could concentrate on being better in what's really important, like the web container, reliable SLSB remoting, etc.

    > This does not have to be the case at all. I agree that the current commercial AppServers do it this way, but it doesn't have to be that way. See the work on the new Apache J2EE stuff, which is targetted to explicitly allow you to do exactly what you're describing. See also Jonas and JBoss and OpenEJB and Tyrx. And various JMS implementations, various JNDI implemenations, etc etc.

    I endorse that trend! You're right, it's what I'm talking about from a server perspective. Of course, application frameworks can make your life easier top-down instead of bottom-up: By not requiring certain container services at all, making migration between servers much easier. Those two approaches meet in the middle: Lightweight application frameworks and pluggable container services complement each other.

    > Please note also that servlet/JSP engines have never been forcibly tied to the rest of app server containers. You can use a light-weight JSP engine calling into Weblogic or Websphere if that floats your boat.

    This way, you can just call into the backend server *remotely*, which is not what I'm talking about. For colocated apps and services, it's still all or nothing.

    An application will definitely be easier to deploy and manage if it does not require certain container services at all. For example, if you're just accessing one single database anyway, then why delegate to the container's XA-capable JTA service? A lightweight single-DataSource transaction strategy like the one that Spring offers just requires JDBC.

    If you don't call into a complex subsystem like JTA at all, then you cannot run into problems with it on your particular server. That makes most sense if you don't require its particular services in the first place. Note that you can still *choose* to use container-managed DataSources and JTA; the difference is that it is only necessary if you actually require distributed transactions.

    For another example, a special Hibernate issue: If you're using JTA or EJB CMT for that matter, you need to register a special server-specific transaction manager lookup with Hibernate for proper JVM-level read-write caching. When using a single-SessionFactory transaction strategy like Spring's HibernateTransactionManager, you don't delegate to JTA; therefore, you don't have the problems of JTA transaction completion callbacks in the first place.

    Juergen
  58. \Juergen Hoeller\
    Calling Entity Beans "crazy" could be considered FUD too ;-)
    \Juergen Hoeller\

    It ain't FUD if it's true.

    \Juergen Hoeller\
     Seriously, I'd love the EJB 3.0 expert group to get rid of Entity Beans completely, making the spec so much slimmer.
    \Juergen Hoeller\

    I don't think you'll see entity beans removed completely anytime soon. I'd prefer just to see them split out into a separate standard, and then let it die its own quiet death.

    \Juergen Hoeller\
     What's a problem for app server vendors is ultimately also negative for users: Vendors have to provide so much, ehm, not so useful stuff to get certified, when they could concentrate on being better in what's really important, like the web container, reliable SLSB remoting, etc.
    \Juergen Hoeller\

    I understand the sentiment, but most containers already do the above well. Or are you just talking about barriers to entry into J2EE spec land for a new container impl? If the latter, perhaps for the best - in that world I can just see Rod pushing the RJ Spring Application Server on the unsuspecting masses....

    \Juergen Hoeller\
    I endorse that trend! You're right, it's what I'm talking about from a server perspective. Of course, application frameworks can make your life easier top-down instead of bottom-up: By not requiring certain container services at all, making migration between servers much easier. Those two approaches meet in the middle: Lightweight application frameworks and pluggable container services complement each other.
    \Juergen Hoeller\

    Um, I'm afraid you've lost me. You were just arguing that J2EE sucked because it wasn't a la carte.

    That aside....I think J2EE has proved, if nothing else, that container services _are_ extremely useful for application development. All a framework like Spring does is hide it a bit more completely than Websphere/Weblogic/et al do. But the container services are still there. From my own user-oriented perspective (being a user), the biggest difference is that the various app servers are indeedy standard, and (to belabor the point) Spring is not. Nor is it likely to ever be.

    \Juergen Hoeller\
    This way, you can just call into the backend server *remotely*, which is not what I'm talking about. For colocated apps and services, it's still all or nothing.
    \Juergen Hoeller\

    For commercial guys, yes - and many people would say "so what". Remoteness isn't always quite as bad as you assert. The widespread use of 100Mbps networks has gone a long way to ease remoting pain. And the lucky buggers with Gigabit have it even better.

    This is not to say that remoting should be transparent, or is always the right solution - but you're dismissing it out of hand. For a lot of people it makes their lives easier, and is an excellent option.

    And note that, for non-commercial guys, your argument falls flat. You can plug-in servlet engines in the same process.

    \Juergen Hoeller\
    An application will definitely be easier to deploy and manage if it does not require certain container services at all. For example, if you're just accessing one single database anyway, then why delegate to the container's XA-capable JTA service? A lightweight single-DataSource transaction strategy like the one that Spring offers just requires JDBC.
    \Juergen Hoeller\

    Sorry, but this smells like more Fuddery to me. Using an app server's built-in transaction manager has never been a challenging task for me. What complexities there are in it are pretty much tucked away from me. Indeed, my old SLSB code from several years ago that never heard of XA runs just fine under modern containers' XA transaction managers. If I happen to be using an app server and it happens to have an XA transaction manager, what do I care about it?

    \Juergen Hoeller\
    If you don't call into a complex subsystem like JTA at all, then you cannot run into problems with it on your particular server. That makes the most sense if you don't require its particular services in the first place. Note that you can still *choose* to use container-managed DataSources and JTA; the difference is that it is only necessary if you actually require distributed transactions.
    \Juergen Hoeller\

    <sarcasm>
    Yes, marking my SLSB as "TRANSACTION_REQUIRED" is harrowing and traumatic. Having the container automatically downgrade 2PC transactions to the one-phase optimization is invisible, but I should somehow worry about it. Having the container automatically include a JMS publish in a transaction and ensure 2PC is an equally nerve-jangling experience.
    </sarcasm>

    Come on, give me a break Juergen. These are made up straw man arguments.
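    For reference, the declarative setup I'm being told to fear amounts to a few lines of ejb-jar.xml (the bean name here is made up for illustration):

```
<assembly-descriptor>
  <container-transaction>
    <method>
      <!-- "OrderManager" is a hypothetical SLSB -->
      <ejb-name>OrderManager</ejb-name>
      <method-name>*</method-name>
    </method>
    <trans-attribute>Required</trans-attribute>
  </container-transaction>
</assembly-descriptor>
```

    That's it; the container handles enlistment, the one-phase optimization, and 2PC from there.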

    \Juergen Hoeller\
    For another example, a special Hibernate issue: If you're using JTA or EJB CMT for that matter, you need to register a special server-specific transaction manager lookup with Hibernate for proper JVM-level read-write caching. When using a single-SessionFactory transaction strategy like Spring's HibernateTransactionManager, you don't delegate to JTA; therefore, you don't have the problems of JTA transaction completion callbacks in the first place.
    \Juergen Hoeller\

    That's Hibernate's problem, not mine :-)

    Incidentally, I just rechecked the latest Spring stuff, and found to my surprise that every damn piece of it has a different package name. You guys just completely broke every user's source code and you're advocating it as an alternative to something standard like J2EE? This is the sort of open source playing around that just drives people away in droves.

        -Mike
  59. \Mike Spille\
    Incidentally, I just rechecked the latest Spring stuff, and found to my surprise that every damn piece of it has a different package name. You guys just completely broke every user's source code and you're advocating it as an alternative to something standard like J2EE? This is the sort of open source playing around that just drives people away in droves.
    \Mike Spille\


    We've changed package names from "com.interface21" to "org.springframework" as of our first 1.0 milestone release. That has been announced as early as release 0.9 and isn't a big deal at all: For example, Hibernate has also changed root package names from 1.x to 2.0. Migration is straightforward, a simple search-and-replace, and has not posed a problem for any user that I've come across. So please relax a bit!

    BTW, we've never advocated Spring as an alternative to J2EE, holy smoke! We were just talking about that particular feature called declarative transactions that competes with local SLSBs, remember? We build on J2EE wherever we can, from Servlets and JSPs to optional JTA and JNDI DataSources. Spring is and always will be an application framework, not an application server... come on, I shouldn't have to tell you those things!

    Juergen
  60. \Juergen Hoeller\
    We've changed package names from "com.interface21" to "org.springframework" as of our first 1.0 milestone release. That has been announced as early as release 0.9 and isn't a big deal at all: For example, Hibernate has also changed root package names from 1.x to 2.0. Migration is straightforward, a simple search-and-replace, and has not posed a problem for any user that I've come across. So please relax a bit!
    \Juergen Hoeller\

    I've worked in many environments where an upgrade would be outright refused if it involved changing all of the software that touches it (even if the change is trivial). Perhaps it hasn't posed a problem for anyone because it's not being seriously used?

    \Juergen Hoeller\
    BTW, we've never advocated Spring as an alternative to J2EE, holy smoke! We were just talking about that particular feature called declarative transactions that competes with local SLSBs, remember? We build on J2EE wherever we can, from Servlets and JSPs to optional JTA and JNDI DataSources. Spring is and always will be an application framework, not an application server... come on, I shouldn't have to tell you those things!
    \Juergen Hoeller\

    I have to disagree with you on this. The message you've sent, loud and clear, is that Spring is an alternative to using full blown J2EE services in an app server.

        -Mike
  61. Mr. know-it-all and never-satisfied,

    > I've worked in many environments where an upgrade would be outright refused if it involved changing all of the software that touches it (even if the change is trivial). Perhaps it hasn't posed a problem for anyone because it's not being seriously used?

    The latter remark is just silly, and more FUD than I might have spread at any time in this thread. We have some pretty "serious" users; you might want to check the "Introduction to Spring" thread again, and maybe check the developers list from time to time. BTW, the company that I'm affiliated with is building all its J2EE products on Spring. We are not full-time framework builders; all of us are involved in real-life application projects in our day jobs.

    > I have to disagree with you on this. The message you've sent, loud and clear, is that Spring is an alternative to using full blown J2EE services in an app server.

    One last time: Spring happens to offer alternatives to *certain* J2EE services for *certain* scenarios, like lightweight transaction management for a single database instead of delegating to JTA, and declarative transactions for POJOs instead of using local EJBs just for that single benefit. We are not offering alternatives for any other J2EE services; in fact, most existing Spring apps are J2EE web applications that inherently depend on the full J2EE web monty.

    'Nuff said, I don't want to repeat myself. Re-reading the relevant postings with an open and unbiased mind gives a clear enough picture.

    Juergen

    P.S.: Good one, Cameron - your new sig ;-)
  62. \Juergen Hoeller\
    Mr. know-it-all and never-satisfied,
    \Juergen Hoeller\

    It's been conclusively demonstrated on many occasions that I do not know it all, or even a significant fraction thereof. It's also been conclusively shown that I can come to agreement with people, or disagree with people, or discover that my viewpoint was wrong and concede that I was in error.

    It just so happens that in this case, those cases don't seem to apply :-)

    \Juergen Hoeller\
    The latter remark is just silly, and more FUD than I might have spread at any time in this thread. We have some pretty "serious" users; you might want to check the "Introduction to Spring" thread again, and maybe check the developers list from time to time. BTW, the company that I'm affiliated with is building all its J2EE products on Spring. We are not full-time framework builders; all of us are involved in real-life application projects in our day jobs.
    \Juergen Hoeller\

    People are using Spring, this is clear. Not many, but some are. Some are claiming that they're using Spring for mission critical software that has been deployed in production - I'm impressed, since the first recorded downloads were from less than 4 months ago (must be some mission critical app to be developed and deployed in under 4 months). I understand that many shops can play fast and loose with what they use. I even understand that there are people who don't mind a new upgrade of a package literally breaking every file that uses it.

    As for "We are not full time framework builders", my response is the same it has always been: _I do not care_. Your employment status has no bearing on whether or not the product you are pushing (and pushing aggressively at that) is relevant to people's needs.

    \Juergen Hoeller\
    One last time: Spring happens to offer alternatives to *certain* J2EE services for *certain* scenarios, like lightweight transaction management for a single database instead of delegating to JTA, and declarative transactions for POJOs instead of using local EJBs just for that single benefit. We are not offering alternatives for any other J2EE services; in fact, most existing Spring apps are J2EE web applications that inherently depend on the full J2EE web monty.
    \Juergen Hoeller\

    Thanks for the clarification. And if you want to know why I become a bit exasperated, note how your posts have now changed from originally saying that you're not a J2EE alternative to saying you are an alternative ("to *certain* J2EE services"). Also keep in mind on the exasperation front that a thread about the architecture of EBay using J2EE was turned into a Spring advertising campaign.

         -Mike
  63. <Mike Spille>
    People are using Spring, this is clear. Not many, but some are. Some are claiming that they're using Spring for mission critical software that has been deployed in production - I'm impressed, since the first recorded downloads were from less than 4 months ago (must be some mission critical app to be developed and deployed in under 4 months). I understand that many shops can play fast and loose with what they use. I even understand that there are people who don't mind a new upgrade of a package literally breaking every file that uses it.
    </Mike Spille>

    I've lurked on this thread and enjoyed sitting back and watching you (Mike) and Juergen duke it out. However, I take this comment as a direct attack, since (to my knowledge) my app and Dmitry Kopylenko's (at Rutgers University) are the primary mission-critical live apps out there. I can't speak for Dmitry, and I'll give some details on our app below, but I have 2 questions. Do your comments mean that if you release early, often, and iteratively, it's not a mission-critical app? Are you implying that if you research the code, find it to be well-developed and ideal for your needs, and use it in production apps, you are "playing fast and loose"?

    Mission-critical depends on the business, and while we don't adhere strictly to XP, we do follow a number of its principles including release early and often. We adopted VERY early and helped with getting Spring from "book code" on Wrox over to the SourceForge project. The first live iteration was deployed in April (1 1/2 months after SourceForge), and we have participated in the development of Spring while using the February codebase for our live app. We've released additional functionality live every 3 weeks (approx), and the app has grown in a stable and consistent way (and our client has the benefit of using the functionality without waiting for the entire thing to be finished).

    Last month we migrated to the CVS version of Spring with very few problems. It didn't take very long to upgrade and did NOT "literally break every file that uses it". As has been pointed out, Spring allows you to develop an app without introducing a lot of dependencies on the framework. A few global search-and-replaces and rewriting our XML file to the current Spring format: nothing that should cause problems (especially for a competent developer like yourself).

    Trevor
  64. \Trevor Cook\
    I take this comment as a direct attack [....]
    \Trevor Cook\

    It wasn't meant as an attack, although I understand how it might read that way. Let's just say that "mission critical app" has come to mean a very large number of things, to the point that it's tough to know what someone means when they say it.

    \Trevor Cook\
    Do your comments mean that if you release early, often, and iteratively it's not a mission critical app? Are you implying that if you research the code, find it to be well-developed and ideal for your needs, and use it in production apps you are "playing fast and loose".
    \Trevor Cook\

    There is a very strong possibility that we have a different idea of what "mission critical" means, what rules may surround such an application, and specifically how one deals with risk.

    Here is what "mission critical" means to me. This is not presented as a well-formed argument, but rather a free-form stream of statements that more accurately conveys the gestalt of what mission critical means to me:

       - If it doesn't work, people can be fired.
       - If it doesn't work, there are serious repercussions to the company
       - If it doesn't work, a lot of money can be lost
       - If we rely on a flaky vendor, and they flake out, our maintenance costs may skyrocket.
       - Changing code _always_ involves a risk. Every time a developer says "it's a one line change", whole ranks of management cringe with memories of the last time a developer fed them that line.
       - New features and stability are on opposing poles. Both are equally important. They have to be balanced.
       - Every release by development has to be independently certified in QA, and again independently certified in production.
       - Developers must have senior management approval to _touch a keyboard for a production machine_.
       - If your code for a mission critical app is shown to be the cause of a failure, kiss your bonus goodbye.
       - Release early/release often means critical features are missing, or critical bugs only get fixed over many iterations.
       - 5 nines reliability is our minimum requirement. We would strongly prefer more.
       - Every new release will introduce new bugs. This is a proven, statistical fact.
       - Initial response to failures and recovery from those failures has an equal precedence to the primary application functionality.
       - The production operations group has a unilateral right to veto a release.

    That is a taste of what I consider "mission critical". This experience comes from slightly over a decade in the financial services arena.

    The above free-association list pretty closely matches what a lot of other people also consider "mission critical". These types of apps are clearly separated from other "softer" apps where the requirements aren't nearly as stringent.

    But in the realm of mission critical - the above do apply. If you'll enter my world for just a moment....

    Now imagine you have what I call a mission critical app in production. Now imagine going to the various people involved in that app and telling them that you really need to upgrade to Spring M1 (or whatever), and that it involves changing, say, 30 source files.

    This sort of thing may seem insane to some people - but this is what is _really_ meant by "mission critical" application - emphasis on "critical".

    Where you say:

    "We've released additional functionality live every 3 weeks (approx), and the app has grown in a stable and consistent way (and our client has the benefit of using the functionality without waiting for the entire thing to be finished)."

    I say "bravo to you, but you don't release additional functionality every 3 weeks to a mission critical application". Again, I emphasize the "critical" part. What you've described to me may be a very interesting app used by some people in a production setting, but what is it that makes it "critical"?

        -Mike
  65. <Mike Spille>
    It wasn't meant as an attack, although I understand how it might read that way. Let's just say that "mission critical app" has come to mean a very large number of things, to the point that it's tough to know what someone means when they say it.
    </Mike Spille>
    I did read it that way, but I'll accept that it wasn't an attack. I just take a lot of pride in my work (as I'm sure you do). Anyway, that's done. Now we can continue agreeing/disagreeing depending on where the topic takes us.

    As far as the definition of a mission-critical app, you're totally correct that everyone has their own interpretation, and I like your list below which is a good point of reference to continue discussions (hope we're not getting too far off topic here :) ).

    <Mike Spille>
    Here is what "mission critical" means to me. This is not presented as a well-formed argument, but rather a free-form stream of statements that more accurately conveys the gestalt of what mission critical means to me:
        - If it doesn't work, people can be fired.
        - If it doesn't work, there are serious repercussions to the company
        - If it doesn't work, a lot of money can be lost
        - If we rely on a flaky vendor, and they flake out, our maintenance costs may skyrocket.
        - Changing code _always_ involves a risk. Every time a developer says "it's a one line change", whole ranks of management cringe with memories of the last time a developer fed them that line.
        - New features and stability are on opposing poles. Both are equally important. They have to be balanced.
        - Every release by development has to be independently certified in QA, and again independently certified in production.
        - Developers must have senior management approval to _touch a keyboard for a production machine_.
        - If your code for a mission critical app is shown to be the cause of a failure, kiss your bonus goodbye.
    </Mike Spille>
    Agree with all of the above (and the project I'm describing meets these).

    <Mike Spille>
        - Release early/release often means critical features are missing, or critical bugs only get fixed over many iterations.
    </Mike Spille>
    Don't know whether I agree or disagree with this. Consider a system that must place orders and receive invoices. Both are critical/required. Releasing an order system allows them to use it before the invoicing system is ready. In this scenario critical features are not missing from orders, but from an entire-system perspective, the critical invoice system is missing. However, you have solved the problem for the AR department, just not the AP department.

    <Mike Spille>
        - 5 nines reliability is our minimum requirement. We would strongly prefer more.
    </Mike Spille>
    I'm not sure I agree that 5-nines defines "mission-critical". If the system is a payroll system which has data entered during business hours, and then runs a long process (even overnight or multiple days), then the business requirement is that the system performs during that period of time. If for some reason it isn't available outside that period, it still fulfills the business requirements, but it doesn't meet 5-nines. However, that system is mission-critical in my opinion, since the business can't run without it. My personal opinion is that 5-nines is very necessary in specific situations, but on the whole it is an overused buzzword which has little bearing on many real-world mission-critical apps. Also, systems which have 5-nines as a requirement often fail to meet it (Amazon and eBay are fairly prominent examples). Anyway, for the purposes of this discussion, my project doesn't have this requirement, nor does it meet it (although we're pretty close).

    <Mike Spille>
        - Every new release will introduce new bugs. This is a proven, statistical fact.
    </Mike Spille>
    Great, now I have to fight facts. I'll agree with this depending on what release means. If a new release means new features, yes, there are normally bugs (although with any decent testing/QA procedures, they should be few and outside the most used paths through the system). However, a release which only fixes existing bugs does not necessarily introduce new bugs. In our project, a new release consists of both new features and bug fixes, and we do release new bugs outside the normal operational path every 3 weeks (hard for a proud and semi-talented developer to admit).

    <Mike Spille>
        - Initial response to failures and recovery from those failures has an equal precedence to the primary application functionality.
        - The production operations group has a unilateral right to veto a release.
    </Mike Spille>
    Agree with these, and they apply to my project.

    <Mike Spille>
    That is a taste of what I consider "mission critical". This experience comes from slightly over a decade in the financial services arena.
     
    The above free-association list pretty closely matches what a lot of other people also consider "mission critical". These types of apps are clearly separated from other "softer" apps where the requirements aren't nearly as stringent.

    But in the realm of mission critical - the above do apply. If you'll enter my world for just a moment....

    Now imagine you have what I call a mission critical app in production. Now imagine going to the various people involved in that app and telling them that you really need to upgrade to Spring M1 (or whatever), and that it involves changing, say, 30 source files.

    This sort of thing may seem insane to some people - but this is what is _really_ meant by "mission critical" application - emphasis on "critical".
    </Mike Spille>

    Doesn't seem insane; every system has its requirements. Depending how you interpret my answers, I think I agree with all your points except for the "5-nines", and meet all the above criteria except for "5-nines". The systems I work on can generally afford short periods of unexpected interruption. However, if the system stays down for extended periods (for some clients, this is hours, for others weeks), the company could go out of business. If "5-nines" is what truly makes an app mission-critical, I guess mine aren't. To me, the required uptime depends on the company and specific requirements. To me, mission critical is defined by "if the system stays down longer than a specified period of time the company can no longer continue operating", where that specified period COULD be "5-nines" but doesn't have to be.

    As far as changing 30 files being insane, it depends. I've had to change hundreds, but it's generally been a simple search-replace text change, with a few specific code rewrites. However, due to my knowledge of the Spring codebase and our test coverage, this hasn't caused any problems. "For the record", my knowledge of Spring internals is not as thorough as Rod or Juergen's. I understand it to the level that most people understand internal code released from separate departments (know a bit more than the public API, but haven't read every nook and cranny).

    If you are scared of making 30 text replacements of "a" to "b", or reversing a parameter signature, I would suggest that there is a problem with the testing procedures being followed. However, if it's a timing issue (I don't have time to do that search-and-replace) then I would agree that it is too early to move to Spring since (despite our best efforts) the public API will change before the final release.

    <Mike Spille>
    Where you say:

    "We've released additional functionality live every 3 weeks (approx), and the app has grown in a stable and consistent way (and our client has the benefit of using the functionality without waiting for the entire thing to be finished)."

    I say "bravo to you, but you don't release additional functionality every 3 weeks to a mission critical application". Again, I emphasize the "critical" part. What you've described to me may be a very interesting app used by some people in a production setting, but what is it that makes it "critical"?
    </Mike Spille>
    The "critical" part is that without the system, the company can't function. My experience is that most companies need multiple departments to function. As an example, if you can't order raw parts for the factory, the company doesn't make any money. However, even if you can order them, but the system to run the machines doesn't work, the company still isn't operating. In this example, an ordering system would be mission-critical, the factory system would be mission-critical, or both together would be mission-critical. Releasing them iteratively or as a single release does not change their mission-critical nature (in my opinion).
  66. [Sorry for the late reply, work got a touch busy.]

    Taken somewhat out of order....

    \Trevor Cook\
    Great, now I have to fight facts. I'll agree with this depending on what release means. If a new release means new features, yes, there are normally bugs (although with any decent testing/QA procedures, they should be few and outside the most used paths through the system). However, a release which only fixes existing bugs does not necessarily introduce new bugs. In our project, a new release consists of both new features and bug fixes, and we do release new bugs outside the normal operational path every 3 weeks (hard for a proud and semi-talented developer to admit).
    \Trevor Cook\

    My view of release doesn't cover only code, but the code plus all of the various paths/processes involved in migrating a system from development out to production. In the places I've worked on mission critical systems, each time a release was made in production, it was considered a destabilizing event. This has nothing to do with software theory, or "I only changed three lines of code to fix a bug", or the competence of the processes or the people involved. It's based on hard, cold experience. To grizzled, veteran managers, releases tend to break things. Code is just one aspect, where a developer may introduce a bug, subtle or blatant. Problems can also crop up in configuration, environment, pilot error in installing the release, upgrading persistent data stores, interactions with external systems, etc. This is a partial list, but you get the idea.

    Where I work now, releases are considered a really, really destabilizing event. Each release is required to be able to work in parallel with the previous release. Releases are phased in slowly with one server/one client (or what's appropriate) and slowly pushed out to an ever widening audience if all goes OK - and with the ability to pull the plug on the new release and revert everything back to the parallel running older stuff on a moment's notice.

    This is, I admit, a rather draconian approach, and it shows how there can be varying levels of "mission critical". But the reasoning behind it is sound for any system with any serious level of "critical". Releases tend to break things.
     
    \Trevor Cook\
    As far as changing 30 files being insane, it depends. I've had to change hundreds, but it's generally been a simple search-replace text change, with a few specific code rewrites. However, due to my knowledge of the Spring codebase and our test coverage, this hasn't caused any problems. "For the record", my knowledge of Spring internals is not as thorough as Rod or Juergen's. I understand it to the level that most people understand internal code released from separate departments (know a bit more than the public API, but haven't read every nook and cranny).

    If you are scared of making 30 text replacements of "a" to "b", or reversing a parameter signature, I would suggest that there is a problem with the testing procedures being followed. However, if it's a timing issue (I don't have time to do that search-and-replace) then I would agree that it is too early to move to Spring since (despite our best efforts) the public API will change before the final release.
    \Trevor Cook\

    Mission critical systems tend to have a risk analysis for each planned release into production. "Risk analysis" here doesn't have to be a fancy process - it may be an informal conversation over a cup of coffee between the developers and a manager, or it could be a full-blown committee/report deal. In either case, if you really are talking mission-critical, you assess the danger of upsetting the current production apple cart vs. the benefits of doing a release. This ties into my previous point (and is why I'm taking things out of order :-). Experienced managers _know_ that releases break things, even ones tagged as trivial bug fix releases, and they _know_ it because they've witnessed it first hand.

    You may know, in your heart of hearts, that all you did was a trivial search-and-replace. QA may be testing its heart out. But what many people will see is a "hey, a truckload of files have changed here, and this release is supposed to fix some dinky bug that only Joan on 3 cares about". Risk & reward comes into play here, and this sort of thing tends to be a big, fat loser. As I said, you may know the whole impact of the change, you may know tons of the guts of the library in question, and be an exceptionally competent developer. You may have a great QA team. But experienced managers will come back at you and say something like this:

    "You know, 5 years ago I went with a release like this, same kind of people, top notch. And it turned out that the developer just broke up with his girlfriend and was a bit off stride, and his search'n'replace slipped. And, as Murphy would have it, the lead QA guy was off chasing fish in Barbados, and his flakey assistant handled the QA for this one and the problem slipped by him. Slipped by _everyone_, goddamn it!

    At the time it didn't seem like too big a deal - we rolled back the change and we were only down for half an hour. But you know what - come December it was right at the top of my performance evaluation."

    To bring it back to your points - it's not that I'm afraid of changes, or incompetent, or that the change has any real semantic impact. What I came to realize over the years is that an _amazing_ number of things have to go right for a release to make it into production without mishap. All it takes is one or two slipups to coincide with each other for a production release to go wrong (and sometimes, go wrong in the _ugliest_ way imaginable).

    \Trevor Cook\
    I'm not sure I agree that 5-nines defines "mission-critical". If the system is a payroll system which has data entered during business hours, and then runs a long process (even overnight or multiple days), then the business requirement is that the system performs during that period of time. If for some reason it isn't available outside that period, it still fulfills the business requirements, but it doesn't meet 5-nines. However, that system is mission-critical in my opinion, since the business can't run without it. My personal opinion is that 5-nines is very necessary in specific situations, but on the whole it is an overused buzzword which has little bearing on many real-world mission-critical apps. Also, some systems which have 5-nines as a requirement fail to meet it (Amazon and eBay being fairly prominent examples). Anyway, for the purposes of this discussion, my project doesn't have this requirement, nor does it meet it (although we're pretty close).
    \Trevor Cook\

    I think we have different definitions of five-9's. In my parlance, what you're describing is a 24/7 operation, which is a whole 'nuther kettle of wax.

    Five-9's to me applies to the hours when an application is "live", and does not include planned down time. You can have a system that runs, say, 8am to 5pm, and then goes off-line after that for end-of-day processing and then various other whoosa whatsits. That time from 5pm to 8am the next day doesn't count.

    But for 8am to 5pm, you can have an application which demands five-9's - for those 9 hours each business day, that sucker better be up, and if a component fails there better be an automated failover mechanism in place to a backup (or very fast manual if automated just isn't possible). This to me is a typical, integrated part of mission critical - when the app's up, it has to be up and stay up until the planned down time. Often, it means you limp along with a broken part or buggy module and live with the problem until 5pm (or whatever), because limping for a day is better than going dead for 20 minutes to fix it.

    --------------------

    To tie various pieces together, my reaction to a 3-week release cycle to production is due to the comments above. Each release for a mission critical app tends to involve a lot of work to begin with - the machinery that churns in the background and slowly moves things from developers all the way out to the operations center (with an increasing number of people involved the closer to production you come). That effort alone seems a bit much to repeat every 3 weeks.

    Factor in a five-9's requirement, and that means that you have to be able to pull the plug on a failed release quite rapidly. This equates to more work. Plus the added testing work - you have to test that the fall-back procedure actually falls the thing back successfully!! You don't want to be in the moral equivalent of the sys admin who religiously backs up his systems to tape but forgets to check if the backups are readable....

    Then add in the concept that production releases have a tendency to break things, that every release is a destabilizing factor to the production environment, and you have even more resistance to getting just one release out - let alone one every three weeks.

    Take all of the above, roll it into a ball, chew on it for a while - and a 3 week release cycle just isn't achievable. The only way you can achieve it is to _not_ do some of the above.

    Which begs the question - if, as you say, this system is such that if it breaks
    "the company can't function", why aren't you doing the above work? Or if you are, how are you doing all of that work in 3 week cycles over and over again (plus the development work!).

         -Mike
  67. <Mike Spille>
    My view of release doesn't cover only code, but the code plus all of the various paths/processes involved in migrating a system from development out to production. In the places I've worked on mission critical systems, each time a release was made in production, it was considered a destabilizing event. This has nothing to do with software theory, or "I only changed three lines of code to fix a bug", or the competence of the processes or the people involved. It's based on hard, cold experience. To grizzled, veteran managers, releases tend to break things. Code is just one aspect, where a developer may introduce a bug, subtle or blatant. Problems can also crop up in configuration, environment, pilot error in installing the release, upgrading persistent data stores, interactions with external systems, etc. This is a partial list, but you get the idea.

    Where I work now, releases are considered a really, really destabilizing event. Each release is required to be able to work in parallel with the previous release. Releases are phased in slowly with one server/one client (or what's appropriate) and slowly pushed out to an ever widening audience if all goes OK - and with the ability to pull the plug on the new release and revert everything back to the parallel running older stuff on a moment's notice.

    This is, I admit, a rather draconian approach, and it shows how there can be varying levels of "mission critical". But the reasoning behind it is sound for any system with any serious level of "critical". Releases tend to break things.
    </Mike Spille>

    I think part of the issue here is scale, and I think it's safe to say what you're describing is on a far larger scale than what I work on. I still believe that the "mission-critical" part is equal, it's more a matter of size (both of application/code and development team/process). Even XP proponents acknowledge that there is a limit to team size before it starts to break down (usually somewhere between 20-100 developers).

    Our releases are also "prepackaged" so there is almost no chance of things going wrong. Code, config, db changes, etc. are all put into ant scripts so that the "pilot error" is eliminated (you just type ant for the release). We also do release test runs against a replicated server to make sure that everything goes smoothly before the real thing. And due to our smaller scale, the risk of a bad release is almost non-existent. Ignoring "prep-time", an actual release usually takes less than 10 minutes, and restoring to previous state takes about 5. Worst-case scenario is no release and everything is down for 15 minutes.
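
    (To make that concrete: a "just type ant" release generally means every step - packaging, schema changes, deployment - is its own Ant target. A rough sketch, with invented target names, paths, and properties:)

```xml
<!-- Hypothetical sketch of a one-command release build file; every
     name and path here is invented, not taken from any real project. -->
<project name="release" default="release" basedir=".">

  <target name="package">
    <war destfile="dist/app.war" webxml="web/WEB-INF/web.xml">
      <classes dir="build/classes"/>
    </war>
  </target>

  <!-- Schema changes are scripted too, so nothing depends on a human
       remembering to run them by hand. -->
  <target name="db-migrate">
    <sql driver="${jdbc.driver}" url="${jdbc.url}"
         userid="${db.user}" password="${db.pass}"
         src="sql/release-changes.sql"/>
  </target>

  <target name="release" depends="package,db-migrate">
    <copy file="dist/app.war" todir="${deploy.dir}"/>
  </target>

</project>
```

    The point being made: once the whole sequence is captured in targets like these, "pilot error" drops out of the release, and a rehearsal against a replicated server exercises exactly the same script as the real thing.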

    <Mike Spille>
    Mission critical systems tend to have a risk analysis for each planned release into production. "Risk analysis" here doesn't have to be a fancy process - it may be an informal conversation over a cup of coffee between the developers and a manager, or it could be a full-blown committee/report deal. In either case, if you really are talking mission-critical, you assess the danger of upsetting the current production apple cart vs. the benefits of doing a release. This ties into my previous point (and is why I'm taking things out of order :-). Experienced managers _know_ that releases break things, even ones tagged as trivial bug fix releases, and they _know_ it because they've witnessed it first hand.

    You may know, in your heart of hearts, that all you did was a trivial search-n-replace. QA may be testing its heart out. But what many people will see is a "hey, a truckload of files have changed here, and this release is supposed to fix some dinky-bug that only Joan on 3 cares about". Risk & reward comes into play here, and this sort of thing tends to be a big, fat loser. As I said, you may know the whole impact of the change, you may know tons of the guts of the library in question, and be an exceptionally competent developer. You may have a great QA team. But experienced managers will come back at you and say something like this:

    "You know, 5 years ago I went with a release like this, same kind of people, top notch. And it turned out that the developer just broke up with his girlfriend and was a bit off stride, and his search'n'replace slipped. And, as Murphy would have it, the lead QA guy was off chasing fish in Barbados, and his flakey assistant handled the QA for this one and the problem slipped by him. Slipped by _everyone_, goddamn it!

    At the time it didn't seem like too big a deal - we rolled back the change and we were only down for half an hour. But you know what - come December it was right at the top of my performance evaluation."

    To bring it back to your points - it's not that I'm afraid of changes, or incompetent, or that the change has any real semantic impact. What I came to realize over the years is that an _amazing_ number of things have to go right for a release to make it into production without mishap. All it takes is one or two slipups to coincide with each other for a production release to go wrong (and sometimes, go wrong in the _ugliest_ way imaginable).
    </Mike Spille>

    My manager came from a history with mainframe/AS400 systems, so 3 years ago it would take weeks to get a release authorized (just like you describe). He had those memories of horror stories, so was very "restrictive" and considered the moves "risky". However, our development process has worked the way it's supposed to and he has seen the smooth releases. We have finally replaced his horror stories with success stories so that he accepts/allows the 3-week releases. This doesn't mean no risk analysis, simply that the risk is very low so the reward of either bug fixes or new features is almost always greater (since there is little risk of failure, and that only costs around 15 min).

    <Mike Spille>
    I think we have different definitions of five-9's. In my parlance, what you're describing is a 24/7 operation, which is a whole 'nuther kettle of wax.

    Five-9's to me applies to the hours when an application is "live", and does not include planned down time. You can have a system that runs, say, 8am to 5pm, and then goes off-line after that for end-of-day processing and then various other whoosa whatsits. That time from 5pm to 8am the next day doesn't count.

    But for 8am to 5pm, you can have an application which demands five-9's - for those 9 hours each business day, that sucker better be up, and if a component fails there better be an automated failover mechanism in place to a backup (or very fast manual if automated just isn't possible). This to me is a typical, integrated part of mission critical - when the app's up, it has to be up and stay up until the planned down time. Often, it means you limp along with a broken part or buggy module and live with the problem until 5pm (or whatever), because limping for a day is better than going dead for 20 minutes to fix it.
    </Mike Spille>

    The definition of "5-nines" I've read generally boils down to theoretical forecasting and "3 min down every year", which is totally beyond us (especially with an iterative release cycle :) ). We follow what you're describing: it must be up when it's supposed to be and can't fail.

    <Mike Spille>
    To tie various pieces together, my reaction to a 3-week release cycle to production is due to the comments above. Each release for a mission critical app tends to involve a lot of work to begin with - the machinery that churns in the background and slowly moves things from developers all the way out to the operations center (with an increasing number of people involved the closer to production you come). That effort alone seems a bit much to repeat every 3 weeks.

    Factor in a five-9's requirement, and that means that you have to be able to pull the plug on a failed release quite rapidly. This equates to more work. Plus the added testing work - you have to test that the fall-back procedure actually falls the thing back successfully!! You don't want to be in the moral equivalent of the sys admin who religiously backs up his systems to tape but forgets to check if the backups are readable....

    Then add in the concept that production releases have a tendency to break things, that every release is a destabilizing factor to the production environment, and you have even more resistance to getting just one release out - let alone one every three weeks.

    Take all of the above, roll it into a ball, chew on it for a while - and a 3 week release cycle just isn't achievable. The only way you can achieve it is to _not_ do some of the above.
    </Mike Spille>

    Again scale and development processes. We're small enough (and have proof from previous success) that we don't need all that overhead. We still test constantly (both automated and QA), but we perform this testing daily, and the deployment scripts are also updated whenever required. We are literally always ready to deploy so a release mainly involves making a decision that "this is a good time".

    I think your statement would be more correct as "a 3 week release cycle isn't achievable on a large scale". (Before I get flamed by my own "camp", I don't know this to be true, but I respect Mike's opinion/experience enough to accept it as true. Until I work at that scale, I won't pass judgement or make any claims. If anyone has any experience with large-scale iterative/XP development, either good or bad, I'd love to hear about it.)

    <Mike Spille>
    Which begs the question - if, as you say, this system is such that if it breaks "the company can't function", why aren't you doing the above work? Or if you are, how are you doing all of that work in 3 week cycles over and over again (plus the development work!).
    </Mike Spille>
    I'm sounding like a parrot, but again it all boils down to scale. We can achieve all these requirements you've outlined while still releasing quickly. I think we've found a common point of reference though (correct me if I'm wrong). We're both working on "enterprise mission-critical" apps; the difference is the size.

    And with that said, I'm honestly curious how "size" of an app is defined. Even at a basic level, there are small/medium/large/mammoth companies. I work for medium-size clients, and it sounds like you work for something large/mammoth. How do you clearly identify that in forums like this, since it obviously causes confusion, and since the "right answer" is different depending on the scale (we both replicate across multiple servers, but when scale is considered that doesn't necessarily mean we're doing the same thing).

    Trevor
  68. People are using Spring, this is clear. Not many, but some are. Some are claiming that they're using Spring for mission critical software that has been deployed in production - I'm impressed, since the first recorded downloads were from less than 4 months ago (must be some mission critical app to be developed and deployed in under 4 months). I understand that many shops can play fast and loose with what they use. I even understand that there are people who don't mind a new upgrade of a package literally breaking every file that uses it.


    1. Those people have used CVS snapshots before, thus the short timeframe since the first public release.
    2. Literally breaking every file... Well, Hibernate has done that too; no one except you ever complains about well-announced root package renaming.
    3. I can understand that Trevor and co feel offended!

    > As for "We are not full time framework builders", my response is the same it has always been: _I do not care_. Your employment status has no bearing on whether or not the product you are pushing (and pushing aggressively at that) is relevant to people's needs.

    My remark was meant to show that we are using the framework ourselves in production apps. Of course the employment status is not the important point! You are a master of deconstructing others' remarks...

    > Thanks for the clarification. And if you want to know why I become a bit exasperated, note how your posts have now changed from originally saying that you're not a J2EE alternative to say you are an alternative ("to *certain* J2EE services"). Also keep in mind on the exasperation front that a thread about the architecture of EBay using J2EE was turned into a Spring advertising campaign.

    You're deconstructing again. We are not a J2EE alternative; that would imply being an alternative to the whole of J2EE (like .NET). I've *not* changed the direction of my posts. We offer alternatives for *certain* J2EE services, so what? Velocity is an alternative to JSP, and no one ever calls them "J2EE alternative".

    I just dropped in a remark on Spring serving as backbone for a colocated architecture. The thread had turned to a remote-session-facade-or-not debate already. Over and out.

    Juergen
  69. \Juergen Hoeller\
    [various points on people using CVS snapshots, breakage, and Trevor's offense]
    \Juergen Hoeller\

    This is the simplest way I can explain my own viewpoint: using CVS snapshots? In a mission critical application?

    \Juergen Hoeller\
    My remark was meant to show that we are using the framework ourselves in production apps. Of course the employment status is not the important point! You are a master of deconstructing others' remarks.
    \Juergen Hoeller\

    I can only read what is written. It's a fact (sad, but true) that all you have to go on in a forum like this is what people put down in writing. The subtle nuances of personal interaction don't come into play, and all you have is the words.

    \Juergen Hoeller\
    You're deconstructing again. We are not a J2EE alternative; that would imply being an alternative to the whole of J2EE (like .NET). I've *not* changed the direction of my posts. We offer alternatives for *certain* J2EE services, so what? Velocity is an alternative to JSP, and no one ever calls them "J2EE alternative".

    I just dropped in a remark on Spring serving as backbone for a colocated architecture. The thread had turned to a remote-session-facade-or-not debate already. Over and out.
    \Juergen Hoeller\

    Here's the opening of your first post on this topic:

    \Juergen Hoeller\
    I second your analysis. Load balancing at the remote SLSB only adds value if there's very uneven distribution in terms of workload; else, clustering web servers and/or databases is good enough. I prefer to stick to Fowler's First Law of Distributed Computing: "Don't distribute your objects". This is why many people choose A with local EJBs; the only real benefit that they want from SLSBs is the declarative transaction management.

    Note that B can have declarative transactions too, just not via EJB: The Spring Framework supports declarative transactions via its AOP framework, both on JTA and other transaction strategies. This is extremely lightweight; it even works in plain Tomcat with a locally managed DataSource. I strongly believe that this is a viable and convenient alternative to local SLSBs.
    \Juergen Hoeller\

    The rest of the post is one big giant advertisement for Spring. And please note your closing sentence from the above quoted part, all caps added for emphasis:

    "I STRONGLY BELIEVE THAT THIS IS A VIABLE AND CONVENIENT ALTERNATIVE TO LOCAL SLSBs"

    So are ya still wondering where I'm getting the "alternative" idea from? Do you see, perhaps, why your posts don't look quite so innocent and doe-eyed from where I sit?

        -Mike

  70. > Incidentally, I just rechecked the latest Spring stuff, and found to my surprise that every damn piece of it has a different package name. You guys just completely broke every user's source code and you're advocating it as a alternative to something standard like J2EE? This is the sort of open source playing around that just drives people away in droves.
    >
    > -Mike

    I don't think this is a problem that just applies to open source projects - remember "com.sun.java.swing"? Sometimes you have to break backwards compatibility, and if you can do it while you are in beta, then that is so much better than doing it after you release 1.0.

    Thomas
  71. \Thomas Risberg\
    I don't think this is a problem that just applies to open source projects - remember "com.sun.java.swing"? Sometimes you have to break backwards compatibility, and if you can do it while you are in beta, then that is so much better than doing it after you release 1.0.
    \Thomas Risberg\

    Spring has been presented as a viable architectural way to go, and as a possible alternative to using something like SLSBs. Rod and Juergen have repeatedly talked about the stability of the product, its applicability to a wide range of problems, and its wide feature set. When people point out that it's pretty immature, their response is (to paraphrase) that it may be young but not immature.

    So on the one hand, Spring is touted as a solution to a wide range of problems, and is repeatedly "advertised" on forums like this. It is pushed as a solution to serious problems. On the other hand, when pressed they claim youth, and others say "well, it's beta".

    It's yet another attempt at a fledgling open source product trying to have the best of both worlds.

        -Mike
  72. Mike: It's yet another attempt at a fledgling open source product trying to have the best of both worlds.

    (Disclaimer: I've never used Spring. I don't want in on the debate.)

    Mike, you need a blog! Seriously. I mean, if you are going to do the work to learn all these products and understand their strengths and weaknesses and write up your results, you should set up your own blog to post your results. I'd read it.

    Take this as a compliment .. it seems that you are the Ralph Nader (or the Consumer Reports) for all the commercial products and open source projects that get covered by TSS. Any claims too big are cut down to size. (I'm going to have to watch my own claims closely from now on. See new sig below. ;-)

    Join the rebellion at freeroller.net .. oops, I mean come over to the dark side at jRoller.com and set up your own blog!

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: It may actually do lots of cool stuff, but I have to double-check before I make any outlandish marketing claims because Mike Spille is watching.
  73. Mike,

    Maybe it's a misunderstanding - when Rod and Juergen claim that the code is stable, they mean that it has been tested in various scenarios for quite some time and that it does work. It seems that you interpret it as the API is stable and won't change before we get to 1.0, which I doubt that anybody has ever promised.

    Thomas

    > Spring has been presented as a viable architectural way to go, and as a possible alternative to using something like SLSBs. Rod and Juergen have repeatedly talked about the stability of the product, its applicability to a wide range of problems, and its wide feature set. When people point out that it's pretty immature, their response is (to paraphrase) that it may be young but not immature.
    >
    > So on the one hand, Spring is touted as a solution to a wide range of problems, and is repeatedly "advertised" on forums like this. It is pushed as a solution to serious problems. On the other hand, when pressed they claim youth, and others say "well, it's beta".
    >
    > It's yet another attempt at a fledgling open source product trying to have the best of both worlds.
    >
  74. True, the [Spring] codebase is pretty new. It's based on Rod's framework from "Expert J2EE Design and Development" though, so it's already more than 1 year old in total.
    I've successfully used the core concepts since early 2000 in several large commercial projects. So the heritage is over 3 years. Certainly I wouldn't claim it's as mature as WebLogic :-) However, users comment that it's stable, we heavily emphasize testing and there are an increasing number of applications in production. I'm more than happy for its quality to be measured by users' experience.
    I agree that too many places have used EJBs in general, and J2EE at an even higher generalization level, as a one-size-fits-all solution. And paid a terrible price for over-assuming.

    But at the same time, a lot of people have overreacted to such problems and have swung themselves 180 degrees in the opposite direction, which is just as bad (and possibly even worse). What I advocate, and have been advocating for over a decade, is to know your technologies, and make your decisions based on your needs and what the tech is capable of. People who did due diligence 3 or 4 years ago knew where the holes in J2EE were way back then, and weren't caught out by inappropriate technology choices. People who believed the hype, or just used something because it was there or looked cool, played Russian roulette with their projects.

    Your points are well taken, but should also be taken with a grain of salt. For one, there's a strong implication in your arguments that stateless session beans have some massive overhead (in terms of development and in terms of runtime) that frankly does not exist or come into play for many people. A lot of this is people subconsciously (or consciously :-) taking Entity Bean problems and applying them to SLSBs. In real use, it takes a couple of days to understand what an SLSB is and how to create one. The biggest burden for developers is some extra boilerplate code. And in real use on many projects, it's not a performance bottleneck either.

    Spring may be superior, but it's still not standard, and I think on large projects you'd see no substantial development gains from using either approach - large projects don't get stuck on SLSB problems, but on meatier issues.

    \Juergen Hoeller\
    True, the codebase is pretty new. It's based on Rod's framework from "Expert J2EE Design and Development" though, so it's already more than 1 year old in total. But agreed, it's not "proven" technology yet. Regarding lock-in: Well, there will be some lock-in with any technology. But by applying IoC principles very consistently, Spring does a good job of keeping framework dependencies to a minimum (not none at all, of course).
      
    Regarding the EBay example: I expect the effort of migrating a Spring-based solution to something else to be significantly less than migrating an EJB-based solution to a non-EJB one. Of course, a standard mechanism like EJB allows you to migrate from one EJB container to another; but migrating the architecture is a completely different thing.
    \Juergen Hoeller\

    Please don't say "EJB" - let's throw out entity beans for this discussion.

    Leaving just SLSB's - if your architecture really needs the flexibility of remote access (or even the potential), then one way or another you're going to need remote invocations. And using SLSBs is about the best way overall to do that, and standard and portable to boot.

    If you're just throwing out an SLSB codebase and going to something else entirely, chances are somebody screwed the pooch pretty bad and the entire architecture is rotten.

    I think it would be far more likely for someone to need to switch out Spring than to switch out an SLSB layer, if only because Spring just doesn't give you certain capabilities. And because it's unique, this cannot be easy. There are a host of app servers to try out if you run into SLSB problems with one vendor. There's no one to go to if Spring starts giving you growing pains.

    The point here isn't to bash Spring - it looks quite nice for certain uses. Rather, my point is that SLSBs are standard, and aren't quite as costly as you imply, and they give you a certain flexibility. Because of this, it's going to be attractive to a large number of people, and rightly so. And don't denigrate people's fear of lock-in - it's just as problematic in OpenSource as it is in proprietary systems. Beyond the newness - which is very real - of Spring, it does lock you in. And to a large number of people this is a significant risk that needs to be taken into account.

         -Mike
    > Your points are well taken, but should also be taken with a grain of salt. For one, there's a strong implication in your arguments that stateless session beans have some massive overhead (in terms of development and in terms of runtime) that frankly does not exist or come into play for many people.


    Agreed, there's no massive overhead with local SLSBs in terms of performance. There is in terms of development though; you can make it easier with tool support but there's still an extra descriptor and a deployment step involved. This is not as negligible as you suggest either :-)

    > Spring may be superior, but it's still not standard, and I think on large projects you'd see no substantial development gains from using either approach - large projects don't get stuck up on SLSB problems, but usually on meatier issues.

    For many typical J2EE web applications, Spring allows you a very convenient development style completely without an EJB container. This means the option to develop on and deploy to J2EE web application servers like Tomcat or Resin. They are not only cheaper but, even more importantly, far simpler to handle than big iron servers.
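
    (To illustrate the style being claimed here - declarative transactions over a plain POJO service on a servlet engine - a configuration of that era looked roughly like the sketch below. The bean names and classes are invented, and as this very thread notes, the framework's package names moved around during its beta:)

```xml
<!-- Hypothetical sketch of Spring-style declarative transactions
     without an EJB container; all bean names here are invented. -->
<beans>

  <!-- A locally managed DataSource - no container-provided JNDI pool. -->
  <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName"><value>org.hsqldb.jdbcDriver</value></property>
    <property name="url"><value>jdbc:hsqldb:hsql://localhost/app</value></property>
  </bean>

  <bean id="transactionManager"
        class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource"><ref bean="dataSource"/></property>
  </bean>

  <!-- An AOP proxy wrapping a plain service object with transaction
       advice: the declarative part lives here rather than in an EJB
       deployment descriptor. -->
  <bean id="orderService"
        class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
    <property name="transactionManager"><ref bean="transactionManager"/></property>
    <property name="target"><ref bean="orderServiceTarget"/></property>
    <property name="transactionAttributes">
      <props>
        <prop key="place*">PROPAGATION_REQUIRED</prop>
        <prop key="*">PROPAGATION_REQUIRED,readOnly</prop>
      </props>
    </property>
  </bean>

  <bean id="orderServiceTarget" class="com.example.OrderServiceImpl"/>

</beans>
```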

    > Leaving just SLSB's - if your architecture really needs the flexibility of remote access (or even the potential), then one way or another you're going to need remote invocations. And using SLSB is about the best way over all to do that, and standard and portable to boot.

    But many applications will *never* need distribution at the component level. If you really need remoting at a later stage, you can always refactor your system accordingly - it's not trivial but certainly manageable.

    > I think it would be far more likely for someone to need to switch out Spring than to switch out an SLSB layer, if only because Spring just doesn't give you certain capabilities. And because it's unique, this cannot be easy. There are a host of app servers to try out if you run into SLSB problems with one vendor. There's no one to go to if Spring starts giving you growing pains.

    Due to the very nature of the framework, there can hardly be an issue that would make you want to switch out Spring completely. There might be a certain feature that it doesn't offer: Feel free to use something else that gives you the desired feature, be it SLSBs or whatever. There might be a Spring feature that doesn't work: Then use an alternative for the time being. Spring provides glue for all kinds of tools; it's very easy to integrate other solutions.

    > And don't denigrate people's fear of lock-in - it's just as problematic in OpenSource as it is in proprietary systems. Beyond the newness - which is very real - of Spring, it does lock you in. And to a large number of people this is a significant risk that needs to be taken into account.

    Of course there's a lock-in! There's a lock-in with *everything*, be it Struts or Hibernate. Please don't compare Spring with SLSBs: They are completely different things. Spring is an application framework that happens to offer a certain feature that competes with local SLSBs, but it is so much more than just that single feature. As I already said, it allows you to use SLSBs for transaction management without any hassle -- if you choose that way.

    Juergen
  77.
    \Juergen Hoeller\
    Agreed, there's no massive overhead with local SLSBs in terms of performance. There is in terms of development though; you can make it easier with tool support but there's still an extra descriptor and a deployment step involved. This is not as negligible as you suggest either :-)
    \Juergen Hoeller\

    Actually, in my experience it is negligible. An extra descriptor and deployment step - oh me oh my!!

    Sorry for the sarcasm, but it just isn't all that big of a deal for me. Enterprise projects generally have much bigger fish to fry.

    \Juergen Hoeller\
    For many typical J2EE web applications, Spring allows you a very convenient development style completely without an EJB container. This means the option to develop on and deploy to J2EE web application servers like Tomcat or Resin. They are not only cheaper but, even more importantly, far simpler to handle than big iron servers.
    \Juergen Hoeller\

    No argument there.

    \Juergen Hoeller\
    But many applications will *never* need distribution at the component level. If you really need remoting at a later stage, you can always refactor your system accordingly - it's not trivial but certainly manageable.
    \Juergen Hoeller\

    I think part of the problem here is the assumption of just how many applications never need it. I believe it's somewhat higher than perhaps others do.

    And again - if you happen to be working in an app server environment (and I wouldn't call this uncommon :-) SLSBs are there, and they're standard, and they don't have a significant performance cost if you're not remoting. Again, people keep lumping SLSBs into the Entity bean morass, and SLSBs are to some extent unnecessarily demonized.

    \Juergen Hoeller\
    Due to the very nature of the framework, there can hardly be an issue that would make you want switch out Spring completely. There might be a certain feature that it doesn't offer: Feel free to use something else that gives you the desired feature, be it SLSBs or whatever. There might be a Spring feature that doesn't work: Then use an alternative for the time being. Spring provides glue for all kinds of tools, it's very easy to integrate other solutions.
    \Juergen Hoeller\

    Forgive me, but this is a rather Spring-centric viewpoint. Please re-read the above from the standpoint of an objective viewer, someone not invested in the outcome, and it seems a bit silly.

    \Juergen Hoeller\
    Of course there's a lock-in! There's a lock-in with *everything*, be it Struts or Hibernate.
    \Juergen Hoeller\

    Last time I checked there's no lock-in in using SLSBs. They are quite widely implemented. :-)

    \Juergen Hoeller\
     Please don't compare Spring with SLSBs: They are completely different things. Spring is an application framework that happens to offer a certain feature that competes with local SLSBs, but it is so much more than just that single feature. As I already said, it allows you to use SLSBs for transaction management without any hassle -- if you choose that way.
    \Juergen Hoeller\

    If you recall, this mini-thread started with you saying, to paraphrase, "why use SLSBs, why not use Spring instead". The thrust of all of your arguments is using Spring in place of SLSBs and the benefits you derive from doing this.

    So if I'm comparing the two it's only because you started it :-)

        -Mike
  78. hi all!

    1) ad IoC and Spring: I didn't know Spring, and I find its use of AOP for transaction demarcation intuitive and very interesting. Of course, as with all technologies, I'd carefully evaluate (not just technically) whether it's wise to actually use it in a project: if it's just to avoid the "hassle" of packaging a few local SLSBs with declarative transactions, then it might not be worth it...
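    The interception idea behind that style of declarative transaction demarcation can be sketched in plain Java with a dynamic proxy. All the names here (`OrderService`, `TxLog`, `TxProxy`) are hypothetical stand-ins, and the "transaction manager" merely records begin/commit/rollback events instead of talking to a real resource:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

// Hypothetical business interface and implementation.
interface OrderService {
    void placeOrder(String id);
}

class OrderServiceImpl implements OrderService {
    public void placeOrder(String id) { /* business logic would go here */ }
}

// Stand-in for a transaction manager: just records what happened.
class TxLog {
    final List<String> events = new ArrayList<>();
}

class TxProxy {
    // Wrap any target so every method call runs between begin/commit,
    // with rollback recorded if the call throws.
    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface, TxLog tx) {
        InvocationHandler handler = (proxy, method, args) -> {
            tx.events.add("begin");
            try {
                Object result = method.invoke(target, args);
                tx.events.add("commit");
                return result;
            } catch (Exception e) {
                tx.events.add("rollback");
                throw e;
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface }, handler);
    }
}

public class TxDemo {
    public static void main(String[] args) {
        TxLog tx = new TxLog();
        OrderService service =
                TxProxy.wrap(new OrderServiceImpl(), OrderService.class, tx);
        service.placeOrder("42");
        System.out.println(tx.events); // [begin, commit]
    }
}
```

    The business code never mentions transactions at all; the demarcation policy lives entirely in the wrapping step, which is the same separation a container gives you with declarative SLSB transactions.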

    2) ad remote SLSBs: The consensus seems to be that the benefit of additional load-balancing between web-tier and EJB-tier only pays off if you want to explicitly restrict a few well-known resource-intensive SLSBs to run on dedicated machines. (Did anybody ever do this?) On the other hand, simple "random" (e.g. round-robin) load-balancing between the web-tier and an EJB-tier (of identical machines running identical SLSBs) doesn't seem to provide any advantage over co-locating the service layer (as POJOs or local SLSBs) with the web-tier (and doing "random" load-balancing when accessing the web-tier).
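    A minimal sketch of what such "blind" round-robin dispatch between tiers amounts to (class and server names are hypothetical). Note that it knows nothing about which SLSBs are resource-intensive, which is why it adds a network hop without adding any placement intelligence over simply balancing across co-located web/service nodes:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of round-robin dispatch from the web-tier
// to a set of identical EJB-tier machines.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Pick servers in strict rotation: no affinity, no weighting,
    // no knowledge of what the invoked bean actually costs.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}

public class BalancerDemo {
    public static void main(String[] args) {
        RoundRobinBalancer lb =
                new RoundRobinBalancer(List.of("ejb-1", "ejb-2", "ejb-3"));
        for (int i = 0; i < 4; i++) {
            System.out.println(lb.pick());
        }
        // Cycles ejb-1, ejb-2, ejb-3, ejb-1, ...
    }
}
```

    The only scenario where a separate tier changes the picture is when the dispatch is *not* blind, i.e. certain beans are pinned to dedicated hardware, which is exactly the case gerald asks whether anybody has ever actually implemented.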

    3) for remote SLSBs: Do you consider the additional effort of having to create a layer of Data Transfer Objects (DTOs; Value Objects) to be serialised between web-tier and EJB-tier a nuisance? Are your DTOs identical to your persistent classes (so-called "domain DTOs"), or did you ever go the additional mile of hand-crafting your DTOs to perfectly match each use-case (so-called "custom DTOs")? (I didn't.) (For "domain DTOs", Hibernate's ability to detach a persistent object from a transaction, use it as a DTO (by passing it to the web-tier), and re-attach it to a new transaction (to persist its changes to the database) strikes me as especially useful!)
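    The cost being weighed here is essentially one by-value copy per remote call. A small illustrative sketch (the `AccountDTO` class is hypothetical; it simulates the remote hop in-process with plain Java serialization, which is also the mechanism a remote SLSB call uses to ship value objects across tiers):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical "domain DTO": the domain class itself is Serializable
// and is shipped by value between EJB-tier and web-tier.
class AccountDTO implements Serializable {
    private static final long serialVersionUID = 1L;
    final String owner;
    final long balance;

    AccountDTO(String owner, long balance) {
        this.owner = owner;
        this.balance = balance;
    }
}

public class DtoDemo {
    // Simulate the remote hop: serialize to bytes, then deserialize a copy.
    static AccountDTO roundTrip(AccountDTO dto) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(dto);
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (AccountDTO) in.readObject();
            }
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        AccountDTO copy = roundTrip(new AccountDTO("gerald", 100L));
        // The web-tier receives a detached copy, not a live reference.
        System.out.println(copy.owner + " / " + copy.balance);
    }
}
```

    With a co-located service layer there is no such copy: the caller gets a direct reference, which is one reason local calls avoid both the performance cost and the DTO-maintenance nuisance gerald describes.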

      all the best,
      gerald
  79. It didn't use entity beans

    AFAIK, they only used SLSBs for the remote access (object distribution) and probably also for the transaction demarcation. I suspect declarative EJB security was not used, nor "resource management". Clustering would necessarily be used, and a collocated design does not impede that. Declarative transactions are useful, but only slightly more convenient to use than the alternative. And object distribution - well, this is my main question: what did they gain by doing it in this app?


    You realize resource management also includes instance pooling and activation/passivation. The container can do a lot of that transparently when it's managing things itself.

    Sandeep.
  80. use of SLSB

    I got sick of having to boot up all of my DAOs + Hibernate mappings when I was trying to make changes to my portlet ... then I thought, why not use SLSBs as the transport from JVM to JVM ... I spend most of my time rebooting the app server on the view(s) rather than the model anyhow ..

    added benefit(s): scalability ... clustering etc ...
  81. After reading the interview I still don't know in what areas J2EE beat .Net, other than the vague notion of being more "scalable".

    If the application _was_ in fact stateless and used a completely custom persistence model, then there cannot have been much between the two models, surely? Both should have theoretically scaled equally well, so why didn't they in practice?

    Can we get more information on the areas where J2EE, and in particular WebSphere, outperformed the competition?

    Michael.
  82. I am curious to know whether the eBay rearchitecture effort resulted in any dramatic change in the hardware (e.g. fewer servers) needed for the application to run.