To solve this problem I propose using a Distributed Session Workflow pattern (if that name is acceptable). The main idea of the pattern is to describe the use case workflow in a special XML or XSL script with extensions (using products such as Apache Jelly, Ant, Xalan, etc.), which is compiled (at runtime or design time) into some internal form (a pure Java class, a translet, or other) to improve performance. Each base enterprise bean must provide a way to be invoked from this XML descriptor (the Workflow Descriptor). The persistence layer of the application must also provide the basic persistence operations, such as creation, deletion, search, and updating, for all persistent objects (Entity beans, POJOs with JDO, etc.). Thus we have a pattern that lets us construct, change, and rearrange the use case workflow of the session facade objects, which interact with Entity beans, JDO, other Session beans, Message Driven beans, etc. Beyond this, we can use the same Session Bean for different use cases, reducing the number of Session Bean instances and improving performance.
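A minimal sketch of what such a Workflow Descriptor might look like; the element and attribute names below are invented for illustration, not from any real schema:

```xml
<!-- Hypothetical workflow descriptor; names are illustrative only -->
<workflow name="placeOrder">
  <step id="1" bean="OrderValidatorBean" method="validate"/>
  <step id="2" bean="PersistenceServiceBean" method="create" entity="Order"/>
  <step id="3" bean="NotificationBean" method="notifyCustomer"/>
</workflow>
```

The engine (or the compiled form of this script) would resolve each `bean` reference and invoke the named methods in order.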
I would be glad to hear opinions from J2EE and XML specialists and others (developers, managers, designers, architects, etc.) about this pattern, which I would like to use.
Sounds interesting, but some issues come to mind. I would really like to see some detailed ideas on a possible implementation of this pattern, especially the part that deals with synchronization and data transfer between the different components in the workflow.
If one use case involves two or more components, it's a really good idea to define their interactions through some kind of descriptor, for example a Jelly script that could automate some things, as Alexandr proposes. But I am not sure (perhaps I haven't read enough) what a good strategy would be to glue the components together, taking into account that the workflow can be rearranged in the future. Perhaps a Chain of Responsibility (GoF), or a centralized orchestrator, something like PicoContainer but with an order of execution.
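To make the orchestrator idea concrete, here is a minimal sketch of a workflow engine that executes steps in a fixed order and passes a shared context map between them. All class and method names are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Each use-case step is a handler; the engine walks them in declaration order.
interface WorkflowStep {
    void execute(Map<String, Object> context); // shared context carries data between steps
}

class ValidateOrderStep implements WorkflowStep {
    public void execute(Map<String, Object> context) {
        context.put("validated", Boolean.TRUE); // stand-in for real validation logic
    }
}

class PersistOrderStep implements WorkflowStep {
    public void execute(Map<String, Object> context) {
        if (Boolean.TRUE.equals(context.get("validated"))) {
            context.put("orderId", Long.valueOf(42L)); // stand-in for a real persistence call
        }
    }
}

// The orchestrator: unlike a plain container, the order of execution is explicit.
class WorkflowEngine {
    private final List<WorkflowStep> steps = new ArrayList<WorkflowStep>();

    WorkflowEngine add(WorkflowStep step) {
        steps.add(step);
        return this;
    }

    Map<String, Object> run(Map<String, Object> context) {
        for (WorkflowStep step : steps) {
            step.execute(context);
        }
        return context;
    }
}
```

In a real deployment the step list would be built from the Workflow Descriptor rather than hard-coded, and each step would delegate to a session bean.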
This container (workflow engine) could also be the way to transfer data objects between the steps (components) in the workflow.
Also, I think the parts that require persistence (whatever the strategy) could use a provided service, perhaps generated from the workflow engine or added with AOP on top of it.
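One way such a provided persistence service could look, sketched with an in-memory stand-in; the interface and names are invented for illustration, and a real implementation would delegate to Entity beans, JDO, or JDBC behind the same interface:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical generic persistence service the workflow engine could expose to steps.
interface PersistenceService {
    Object create(String entity, Object value); // returns the generated id
    Object find(String entity, Object id);
    void delete(String entity, Object id);
}

// Trivial in-memory stand-in, just to show the contract.
class InMemoryPersistenceService implements PersistenceService {
    private final Map<String, Map<Object, Object>> tables =
            new HashMap<String, Map<Object, Object>>();
    private long nextId = 1L;

    public Object create(String entity, Object value) {
        Object id = Long.valueOf(nextId++);
        table(entity).put(id, value);
        return id;
    }

    public Object find(String entity, Object id) {
        return table(entity).get(id);
    }

    public void delete(String entity, Object id) {
        table(entity).remove(id);
    }

    private Map<Object, Object> table(String entity) {
        Map<Object, Object> t = tables.get(entity);
        if (t == null) {
            t = new HashMap<Object, Object>();
            tables.put(entity, t);
        }
        return t;
    }
}
```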
Overall, it sounds good, any more ideas?
Transferring data between different steps in the flow will severely impact flexibility. You should think more along the lines of stateless beans processing steps in the flow. Also, what needs to be done should be in the data itself, so you need little or no centralized functionality; doing that eliminates the need for distributed state management and the locking/failover issues of a multi-server implementation.
In short: think stateless.
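A small sketch of that stateless idea: the work item itself carries the list of remaining steps, so any stateless processor on any server can pick it up and advance it, with no central engine holding flow state. All names are invented for illustration:

```java
import java.io.Serializable;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.Queue;

// The data says what needs to be done: the item carries its own remaining steps.
class WorkItem implements Serializable {
    final Queue<String> remainingSteps;
    final StringBuilder log = new StringBuilder();

    WorkItem(String... steps) {
        remainingSteps = new LinkedList<String>(Arrays.asList(steps));
    }
}

// A stateless processor: no fields, no per-flow state. Everything it needs
// arrives in the item, so any instance on any server can handle any item.
class StatelessProcessor {
    WorkItem process(WorkItem item) {
        String step = item.remainingSteps.poll();
        if (step != null) {
            item.log.append(step).append(';'); // stand-in for the real step logic
        }
        return item; // would then be forwarded (e.g. via JMS) to the next processor
    }
}
```

Because the processor holds no state, there is nothing to replicate or lock across servers; failover just means another instance picks up the item.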