Discussions

News: Article: Separate Business Logic from Components

  1. Article: Separate Business Logic from Components (17 messages)

    Nathan Abramson and Allan Rubinoff discuss an approach that uses code generation to separate your business logic from the type of component it ends up in (EJB, Servlet, Taglib, JAX-RPC, etc.). They call this approach "multiface coding".

    Read more at JavaPro's website

    Threaded Messages (17)

  2. The idea seems to be good. But when you talk about implicit objects such as HttpSession, which is specific to servlets/JSP, then you are restricting the business objects (BO) to a specific technology. In the real world I don't think it's 100% possible to split the BO from the technology.

    Cheers
    Zulfi
  3. Zulfi:

    You are right, but it all depends on the level of abstraction and the framework you are using. Although this comment might be a little bit off topic I will post it anyway ;).


    At some stage we decided that we needed a far more flexible portal framework than we could find on the market. Therefore we started to develop a multi-protocol framework (just for our own purposes) that tried to separate the request technology used from the process flow and the business logic as much as possible.

    The first 3 revisions of the framework used a generic request/response model and made heavy use of factories, wrapping objects, etc. to hide implementation details - which in the end was a nice try but did not work at all. The reasons were overhead, the need to access the specific technology at some stage in the process flow, high complexity, the need for project-specific extensions to the framework, and much more.

    Since we still loved the idea, we started from scratch and applied all the lessons learned.

    A few points about the concept:

    The basic idea is, well, Model 2.

    Entry points (such as servlets for browser/WML/SOAP requests, JMS listeners, etc.) translate requests to the framework into a generic ProcessingContext, which provides access to all request and required session parameters, plus an environment object to allow access to protocol-specific details later on.

    The request processing is handled by a stack of stateless components called smartlets, somewhat close to the interceptor stack pattern. These smartlets implement coarse-grained logic and make use of a service layer which provides the fine-grained business logic. Over time a lot of generic smartlets were introduced: renderers (JSP, CSV, XML, SOAP ...), validation, authorization, and authentication smartlets (usage is declarative), and a generic way to process the business logic within a remote or local EJB (just a configuration issue, possible at any stage in the process flow). The results are clear separation of concerns, a high level of code reuse, high flexibility in request processing and - which was really important to us - low complexity. (Implementing a HttpStatusCodeRenderer is not that much different from implementing a LoginSmartlet.)
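    The stack described above might be sketched roughly as follows. This is my illustration, not the actual framework: the Smartlet and ProcessingContext names come from the post, but every interface, method, and class body here is an assumption.

```java
import java.util.*;

// Hypothetical sketch of an interceptor-style smartlet stack. A generic
// context carries request parameters and results; each smartlet handles
// one coarse-grained concern (authentication, rendering, ...) and then
// delegates to the rest of the stack.
interface ProcessingContext {
    Object getParameter(String name);
    void setResult(String key, Object value);
    Object getResult(String key);
}

interface Smartlet {
    void process(ProcessingContext ctx, SmartletChain chain);
}

class SmartletChain {
    private final Iterator<Smartlet> it;
    SmartletChain(List<Smartlet> stack) { this.it = stack.iterator(); }
    void proceed(ProcessingContext ctx) {
        // Hand control to the next smartlet, if any remain.
        if (it.hasNext()) it.next().process(ctx, this);
    }
}

class SimpleContext implements ProcessingContext {
    private final Map<String, ?> params;
    private final Map<String, Object> results = new HashMap<>();
    SimpleContext(Map<String, ?> params) { this.params = params; }
    public Object getParameter(String name) { return params.get(name); }
    public void setResult(String key, Object value) { results.put(key, value); }
    public Object getResult(String key) { return results.get(key); }
}

public class SmartletDemo {
    static Object handle(Map<String, ?> params) {
        // An authentication smartlet followed by a business-logic smartlet;
        // which smartlets run is purely a stack-configuration decision.
        Smartlet auth = (ctx, chain) -> {
            if (!"secret".equals(ctx.getParameter("token")))
                throw new SecurityException("not authenticated");
            chain.proceed(ctx);
        };
        Smartlet logic = (ctx, chain) -> {
            ctx.setResult("greeting", "hello " + ctx.getParameter("user"));
            chain.proceed(ctx);
        };
        ProcessingContext ctx = new SimpleContext(params);
        new SmartletChain(List.of(auth, logic)).proceed(ctx);
        return ctx.getResult("greeting");
    }

    public static void main(String[] args) {
        System.out.println(handle(Map.of("token", "secret", "user", "jens")));
        // prints: hello jens
    }
}
```

    The point of the pattern is that the business-logic smartlet never sees a HttpServletRequest or a SOAP envelope; an entry point for each protocol builds the ProcessingContext, and the same stack serves them all.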

    Additionally we introduced abstraction layers in all areas of former concern (session management, SOAP implementations, JMX infrastructure ...). Example: our abstract session management reduced the implementation of a multi-protocol (WML/SOAP) single sign-on in a loosely coupled environment to a matter of finding the right distribution technology for a clustered environment.

    All I can say is that it really works, does not add overhead at all, and does not require you to forget all your OO paradigms.

    Jens

    PS: We don't believe in generated code at all.
  4. Ha - I forgot the important part;)

    What we finally achieved is that implementing some business logic apart from the generic smartlets depends neither on the request protocol used nor on the execution tier (servlet, local or remote session bean); it just depends on the stack configuration of the requested operation. Therefore a developer can easily concentrate on his primary task: satisfying requirements by reusing as much code as possible.
  5. This is called (request) normalization.
  6. This is called (request) normalization.
  7. Why does the advent of J2EE suddenly require us to throw out everything we know about object-oriented design?

    The article states:

    This business method might take a ShoppingCart object, apply all the proper discounts and taxes, and return a total cost.


    Why wouldn't we want our ShoppingCart object to have a getTotalCost() method? Isn't this the way we design good objects?

    I think the article's warning about the dangers of intertwining business logic with the servlet, EJB, etc. is a good one. However, this throwback to functional programming is not the answer. The effort to write a servlet, EJB, etc. that wraps such a ShoppingCart object would be trivial. If you want a code generator-- well, I'm as big a fan of laziness as the next guy. But don't tell me that to write J2EE I need to abandon good OO design!
  8. Having a getTotalCost() method in the ShoppingCart class or not has nothing to do with using OO or not. It all depends on what you want to model using OO.

    Take the shopping basket (cart) example again. In my local shop, they have cash registers. The shop doesn't ask my shopping basket for the total cost. Instead, they feed the product references to the cash register, which calculates the total cost.

    Depending on what a shopping cart is in your world, you model it differently.

    I would even vote in favor of not having a getTotalCost method in the ShoppingCart, since having the method somewhere else gives you much more flexibility. A shopping cart is what it is: it holds products and nothing more.
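    To make the two designs concrete, here is a minimal sketch. Apart from ShoppingCart and getTotalCost(), every name is illustrative: Design A puts the total on the cart, while Design B keeps the cart a plain container and lets a separate "cash register" price it.

```java
import java.util.*;

class Product {
    final String name;
    final double price;
    Product(String name, double price) { this.name = name; this.price = price; }
}

class ShoppingCart {
    private final List<Product> items = new ArrayList<>();
    void add(Product p) { items.add(p); }
    List<Product> getItems() { return Collections.unmodifiableList(items); }

    // Design A: the cart computes its own total (the getTotalCost() style).
    double getTotalCost() {
        return items.stream().mapToDouble(p -> p.price).sum();
    }
}

class CashRegister {
    // Design B: pricing lives outside the cart, so tax and discount rules
    // can vary per register without ever touching the ShoppingCart class.
    private final double taxRate;
    CashRegister(double taxRate) { this.taxRate = taxRate; }
    double total(ShoppingCart cart) {
        double net = cart.getItems().stream().mapToDouble(p -> p.price).sum();
        return net * (1 + taxRate);
    }
}

public class CartDemo {
    public static void main(String[] args) {
        ShoppingCart cart = new ShoppingCart();
        cart.add(new Product("book", 10.0));
        cart.add(new Product("pen", 2.0));
        System.out.println(cart.getTotalCost());               // Design A: 12.0
        System.out.println(new CashRegister(0.1).total(cart)); // Design B: net plus 10% tax
    }
}
```

    Neither design is "more OO" than the other; they simply model different real-world responsibilities, which is the point of the message above.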
  9. writing to a file ...

    <quote> EJBs restrict the operations that a method can perform. EJBs are not allowed to perform some operations, such as accessing the file system.
    If a business method needs to access the file system, it can be wrapped by a JSP tag or a Web service, but not by an EJB. </quote>

    Although EJB/J2EE builds on top of J2SE, certain programming restrictions (e.g. use of static non-finals, synchronization, passing the this reference as a parameter, returning the this reference, etc.) apply to EJB components. These restrictions ALSO apply to the helper/utility classes that the EJB may depend on. Most of these restrictions acknowledge and address the fact that the J2EE environment is potentially multi-server/multi-JVM... Not writing to a file system is really a very trivial restriction that can't cause code malfunction, unlike the other restrictions, which may have unpredictable effects. (Even if the code to access the file system were in a separate non-EJB class, due to the potentially distributed nature of the environment, invoking that class's method from a servlet/JSP in a CLUSTERED environment would cause the file contents to be distributed into multiple files on different physical machines, much as for the EJB, and that was the intent of this restriction.)

    EJB methods are intended to be large/coarse-grained operations, typically requiring access to one or more back-end systems, as well as transaction management, security, etc.

    Generally it is easy to identify operations that are best placed in a session bean. Session beans are pooled for better performance.

    When deciding whether to break some logic out into separate helper/utility classes, the factors that come into play include API-level reusability and code metrics.

    just my random 2 bits of thought stream
  10. <deeperquote>
     
    <quote> EJBs restrict the operations that a method can perform. EJBs are not allowed to perform some operations, such as accessing the file system.
    If a business method needs to access the file system, it can be wrapped by a JSP tag or a Web service, but not by an EJB. </quote>

    Although EJB/J2EE builds on top of J2SE, certain programming restrictions (e.g. use of static non-finals, synchronization, passing the this reference as a parameter, returning the this reference, etc.) apply to EJB components. These restrictions ALSO apply to the helper/utility classes that the EJB may depend on. Most of these restrictions acknowledge and address the fact that the J2EE environment is potentially multi-server/multi-JVM...

    </deeperquote>

    Which just proves something I said (more or less) above:

    Talking about separation should always include "separation of concerns". Always. This is also one reason why we don't like code generation and wizard-based development - people just don't use their brains anymore to understand the implications of their decisions. We therefore introduced fine- and coarse-grained business logic to our developers. The focus of the fine-grained business logic is simple: provide a stable interface to the service to improve maintainability and reusability, using the features you get from the technology used by (one possible) implementation without leaking implementation details to the user of the service. On the other hand, coarse-grained business logic can and should be separated from the technology used - as I mentioned above.

    Jens
  11. [Also available with links at http://beust.com/weblog]

    Nathan Abramson wrote an article about Multiface programming. The idea is to ease J2EE programming by putting the focus back on business methods. Abramson notices that whether you are developing an EJB or a tag library, the underlying technology you are using tends to be intrusive and to impact your business methods, making it hard to reuse your code.

    To solve this problem, Nathan proposes adding XML files to your Java components that describe the semantics relevant to the technology you are using, hence allowing you to leave your business code unpolluted.

    While this is a very laudable goal, I believe Nathan's approach is seriously flawed in many areas.

        * Yet another XML language.

          Let's face it... We are all getting sick and tired of XML, aren't we? While I realize its usefulness, I am a Java developer at heart, and the fewer angle brackets I see in a day, the better I feel.
           
        * Multiface programming can't be totally transparent.

          No matter how hard you try, your business code cannot work in a vacuum. You need to know what's going on around you, what exceptions you can throw, what resources are available, etc... I believe Multiface programming will only work well in very simple cases. And for more complicated ones, the companion XML will become so intricate that we will wonder if we gained anything at all.
           
        * Code generators are hard to write.

          And that's something that Nathan emphasizes himself in his article. It's already hard to imagine a generator that would cover, say, both EJBs and tag libraries, but try to imagine what it must be like when you need to talk to a particular implementation of said technologies. One generator for WebLogic, another for JBoss, another for Tomcat, another for Resin, etc...
           
        * No flexibility in validation.

          The XML file could contain support for simplistic validation such as "<disallowed-value>", as illustrated in the article, but we all know that if you want to be flexible, it must be possible to write the validation logic in Java. So the XML file will point back to Java eventually. And we have gone full circle.
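    The "full circle" point can be illustrated with a small sketch: once the declarative file needs real validation logic, the rule name in the XML typically just resolves to a Java class anyway. The registry below is hypothetical, not from the article; it only shows the shape of the round trip.

```java
import java.util.*;
import java.util.function.Predicate;

// Sketch: a declarative rule such as <validator name="NonEmpty"/> ends up
// being resolved to plain Java behavior anyway -- the "full circle".
public class ValidatorRegistry {
    private static final Map<String, Predicate<String>> RULES = new HashMap<>();
    static {
        // These predicates are the Java code the XML would point back to.
        RULES.put("NonEmpty", s -> s != null && !s.isEmpty());
        RULES.put("Numeric", s -> s != null && s.matches("\\d+"));
    }

    // The rule name comes from the descriptor; the logic is ordinary Java.
    public static boolean validate(String ruleName, String value) {
        Predicate<String> rule = RULES.get(ruleName);
        if (rule == null)
            throw new IllegalArgumentException("unknown rule: " + ruleName);
        return rule.test(value);
    }

    public static void main(String[] args) {
        System.out.println(validate("NonEmpty", "hello")); // true
        System.out.println(validate("Numeric", "12a"));    // false
    }
}
```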
           

    But I guess what disturbs me most in Nathan's approach is that he fails to mention two very valid technologies that have been gaining increasing momentum these past years, and that seem to be a perfect match to solve the problem at hand:

        * Aspect-Oriented Programming
        * Metadata programming, as illustrated by tools such as EJBGen and XDoclet

    These two approaches are well established, have proven implementations and they have one huge advantage over Multiface programming: they are centered around Java, not XML.
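    For readers unfamiliar with the metadata style mentioned above: XDoclet-era tools read javadoc tags in plain Java source and generate the surrounding container artifacts. The sketch below is simplified and hedged: the @ejb.* tag names follow XDoclet's convention, but exact attributes vary between versions, and a real EJB 2.x bean class would also implement javax.ejb.SessionBean. The bean name and method are illustrative.

```java
/**
 * The metadata lives in the source itself, not in a separate XML file;
 * a generator derives home/remote interfaces and descriptors from it.
 *
 * @ejb.bean name="PricingService"
 *           type="Stateless"
 *           view-type="remote"
 */
public class PricingServiceBean {
    /**
     * Plain business logic; the container plumbing is generated around it.
     *
     * @ejb.interface-method view-type="remote"
     */
    public double applyDiscount(double amount, double rate) {
        return amount * (1 - rate);
    }
}
```

    The appeal Cedric points to is exactly this: the business method stays ordinary Java, and the "which component type does it end up in" question is answered by tags rather than by a parallel XML language.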
  12. I agree with Cedric 100%!

    In such a case I would prefer to go one abstraction layer higher: MDA (Model Driven Architecture).

    LoDe.
    http://openuss.sourceforge.net
  13. XML ...

    <quote>

    * Yet another XML language.

    Let's face it... We are all getting sick and tired of XML, aren't we? While I realize its usefulness, I am a Java developer at heart, and the less angle brackets I see in a day, the better I feel.

    </quote>

    Hopefully a few more people will realize this, and the fact that using multiple (descriptive) languages at the same time adds complexity, even when using just Java plus one or more XML/other text formats.
  14. It appears to me that the authors of the cited article are really talking about container independence. In this context, "container" means "component infrastructure", such that a J2EE appserver is one container, COM+ is another kind of container, etc.

    Everything seems to live in some sort of container these days, so that the particular nature of the interface between the container and the things contained comes gradually to color our outlook on how to implement things. It should not. Business rules should be invariant across container (WebLogic vs. WebSphere etc.) and container-kind (J2EE vs. COM+ vs. .NET etc.)

    If you follow me this far, then container independence would be a Good Thing. It is also a prime candidate for automation. Such automation is most effectively done by means of generated code to wrap and supplement the "real" code that we write. I don't feel like it is a good use of my time to cope with container issues; do you?
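    A minimal sketch of the "generated wrapper" idea in this message: the hand-written business code is a plain class, and a (normally machine-generated) adapter binds it to one particular container technology. All names here are illustrative, not from the article or any real generator.

```java
import java.util.Map;

// The "real" code we write: container-agnostic business logic.
class OrderLogic {
    int totalQuantity(Map<String, Integer> lines) {
        return lines.values().stream().mapToInt(Integer::intValue).sum();
    }
}

// What a generator might emit for one container kind: it parses the
// protocol-specific input, calls the business method, and formats the
// protocol-specific output. A second generator could wrap the very same
// OrderLogic as an EJB or a tag library, with OrderLogic unchanged.
class GeneratedOrderEndpoint {
    private final OrderLogic logic = new OrderLogic();
    String handle(Map<String, Integer> requestParams) {
        return "total=" + logic.totalQuantity(requestParams);
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        GeneratedOrderEndpoint endpoint = new GeneratedOrderEndpoint();
        System.out.println(endpoint.handle(Map.of("book", 2, "pen", 3)));
        // prints: total=5
    }
}
```

    However bulky the generated layer becomes, the invariant business rule lives in exactly one hand-written place, which is the container-independence property argued for above.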

    And as for the generated code itself, I don't care whether it is in Java or XML or Perl or C or whatever, so long as it is correct and not a bottleneck. I also don't care how bulky it is. If it needs to dwarf the "real" code by a factor of 100 to get the job done, then so be it. I'm doing what I'm good at, and the machine is doing what it's good at.

    I'm nowhere near as sick of looking at XML as I am of having to think in terms of containers and their manifold idiosyncrasies.
  15. <quote>
    If you follow me this far, then container independence would be a Good Thing. It is also a prime candidate for automation. Such automation is most effectively done by means of generated code to wrap and supplement the "real" code that we write. I don't feel like it is a good use of my time to cope with container issues; do you?
    </quote>

    Yes, and MDA is the way to go, not building yet another "container independent" framework...

    LoDe
    http://openuss.sourceforge.net
  16. Well, yes, but once you have done your MDA, then what do you deploy your artifacts into, and how? (We are really talking about the same thing here.)
  17. I'm a bit pragmatic in this area. For me the situation is just the same as 15 years ago: this is all about communication between human and machine. In the end I have to implement the code in one or more languages (the medium of communication) to tell my machine to do what I want. For this purpose I can use these main components (just as when communicating with other people):

    - Textual description
    - Picture/diagram
    - Voice

    At the moment we as developers only use textual description (code) and diagram (UML and other diagrams) in our language (Java, XML and others).

    As you can see from the evolution of "normal" languages (e.g. English), we use all the main components above together to communicate. So in the end we will always have these components, which never change:

    - Textual representation -> our code. At the moment we are writing the code in Java. The abstraction level of our textual representation is surely getting higher and higher (see ML, assembler, C, C++, Java). The next step could be Java with many components within all the different domains. So for me the language itself has to be improved to reach a higher level of abstraction, rather than building a new framework. This is slow (language evolution, not revolution) but sure. Actually all of us have to work on the language specification ;-)
    - Picture/diagram -> At the moment UML.
    - Voice -> Not yet.

    The combination of these 3 main components can help us write better applications. Surely they have to be in good proportion. You cannot describe everything with diagrams or voice. It's sometimes clearer to have text. On the other side, wouldn't it be very informative to have some points in voice?

    At the moment UML tries to be a general language (not only diagrams but also a textual representation). So for me this is a top-down approach:
    - Textual representation -> UML. The problem here is that UML has to bind to another programming language to become a full language that can communicate directly with the machine. It could be very interesting to see whether UML will become a full-blown language without help from another programming language ;-) Then it would be enough to have your textual description in UML... no need for Java.
    - Picture/diagram -> UML
    - Voice -> Not yet

    A bottom-up approach would be: adding diagram descriptions to the Java language specification.

    I think the easiest way is to add a "UML diagram description" directly to the Java language specification and use Java as the language for UML (not building another syntax). One vendor will surely *disagree* with this ;-) I call this the middle approach.

    LoDe
    http://openuss.sourceforge.net
  18. IMHO, the most useful approach to separating business logic from code that actually delivers on its promise is BPM/workflow.
    The main question I used to ask myself is whether it's possible to use domain abstractions (a DSL) to describe the model and then somehow generate code in the REAL world.

    In general we could make domain-specific abstractions possible by narrowing the problem domain, e.g. to online trading, insurance, etc.
    Then again, the domain should be well understood enough to be formalizable.
    What does formalizable mean?
    It means that the functioning of your system in this particular domain can be described by means of PROTOCOLS, and what your system does is actually interpret these protocols. Put another way, the functioning of your system is like a theatrical performance with a prescribed scenario and actors.
    This explains in particular why SDL/MSC and tools like Telelogic are successful in the telecom field but fail in other fields.

    As for UML (xUML), it tries to achieve two mutually antagonistic goals simultaneously:
    1) An informal graphical notation in a general, one-size-fits-all sense.
    2) A formal graphical language that allows code generation sufficient for execution.

    Recent additions to UML such as action specifications (UML AS) allow a thorough description of the model that, once described, can be executed directly without any code generation (UVM) for simulation purposes, and after that can be mapped to an executable system by using different model compilers targeted at some particular platform (J2EE, .NET, CORBA, Jini, POJO, etc.) *AND* problem domain.
    The last (*AND*) condition, which is central to the problem, is lacking in standard UML. This means there is an implicit assumption that every problem domain will have its own extensions beyond UML. IMO this issue is not addressed at all today, and it is what BPM is now trying to address.

    The idea of using an orthogonal model has some additional advantages, such as:
       O (1) Do not over-constrain sequencing (i.e. concurrency & data flow)
       O (2) Separate computations from data access, to make decisions about data access without affecting the algorithm specification
       O (3) Manipulate only UML elements, to restrict the generality and so make a specification language

    The idea of (1) will contribute to

      O understanding of algorithms, by specifying only the information that is necessary for the algorithm's functioning (e.g. if Knuth had used a UML-AS-like language for describing his algorithms, his books could be more readable)
     BUT:
       o This is easily arguable, because even the simplest algorithms in graphical UML-AS notation do not fit on a single page.
       o I can hardly imagine programmers drawing a line just to connect arguments to the input pins of a standard operator like '+'. I think they will have a lot of fun here : )
     
      O transparent parallel programming
     BUT:
       o The problems that really need distributed computation are very rare (such as GCA).
       o Usually we have too few processors in the real world to implement such a parallel instruction pipeline (in the UML-AS sense).

    The idea of (2) seems more like a utopia. Every problem domain needs its own data structures (in terms of usability, performance, etc.). When you build an application you already consider the particular problem domain, so you do not need this extra step of specifying a general data model and then mapping it to your domain requirements. I also fail to understand how a model compiler can be so smart as to know what you need. Isn't that just in your mind? What we need is already here (OR mappers, etc.).

    As for (3), as was already said, UML is too general. So connecting things together does not provide much value over traditional coding - too many things remain untouched in UML (sessions, tx, security, etc.).

    There are a few vendors already shipping some kind of UML+AS tools, like Project Technology (real-time, telecom) and Kabira (telecom, business apps), but I haven't seen any really complex system built using these tools yet.

    So the main reasons why I think BPM is, and will remain, more successful for business apps than UML+AS are:
    1) A restricted problem domain.
    2) Features specific to this domain (value-adds).
    3) Coarse-grained processes (BPM does not go down to the operation/instruction level of detail).

    Today, there are too many attempts to invent yet another XML language. Maybe AspectJ is doing well because it is *CODE*, not XML.
    I know Rickard Oberg's point of view on this problem (aspects on JBoss, see the presentation at java.no) and in general I tend to agree with him, but the prospect of having thousands of XML files for describing aspects seems menacing. Furthermore, using XML and interceptors can easily result in significant performance degradation. IMO this will work well at a coarse-grained level, while static aspects can be hardcoded with AspectJ.

    Other techniques like GP are already successfully used in today's real systems, but without that built-in-intellect feature. Most of today's research in GP seems to concentrate on this built-in intellect and on reducing the search space. I have yet to see any successful commercial real-world system built using this technique. This approach reminds me of planning (AI) systems that have enough freedom in the domain space to search for an optimal solution. And I know that these attempts fail to address real issues in real systems today, partially because they don't scale.
    The problems addressed there are trivial toy problems only. Real-world business application modeling has nothing to do with math; it's about structuring and formalization (pre-math).
    If someone can prove the opposite to me, I'll much appreciate it.

    I also do not think that pure UML modelers (TogetherSoft, Rational) are of great value, because they can only generate code from class diagrams (again, when we try to make abstractions without restricting the domain, we fail). So the behavior of the system described in UML easily gets out of sync with the code. If we use it just for an intuitive understanding of how things work, then we can easily invent our own notations, which will often be more understandable and informative than standard UML. There is nothing special about UML in humans' ability to understand pictures. So the word *standard* is inappropriate in this context.
    And to me, the advantages of describing classes with class diagrams hardly outweigh the textual form of writing code with a simple hierarchical navigation window. The advantages of the other features are even more questionable. CC, for example, supports ERDs, but in TG's implementation they are more like a toy. If you are building a relational data model for a real application you will probably use ERwin or something.
    The same can be said about business process diagrams. There is nothing real about BPM in CC beyond a buzzword in the diagram title. If you are really serious about using BPM in production you will use tools from BPMS vendors like FileNet, BEA, Sybase, etc.
    Wizards? Maybe they are good for beginners, but they often produce unmaintainable code, and their output can easily be hand-written or automated using some GP technique without all those annoying GUI windows. I don't want to step through a wizard each time I need to create an EJB; I want all my chosen entities mapped to EJBs in one go.
    So how can I benefit from such a one-size-fits-all CASE tool?

    The real thing that matters, IMHO, is a developer's environment that consists of a set of techniques and tools targeted at a particular domain.
    I do not mean that graphical programming is useless by nature (on the contrary, humans' ability to understand pictures faster than text is well known and proven); rather, I'm saying that it would be useful if constrained to some particular problem domain (as BPM does).

    BTW: you don't need to invent new names (multiface coding) for old things (SOC, AOP, GOP, BPM, etc.).
    Regards,
    Basil