I've thought about this one for some time and would really appreciate some good feedback on it.
Access Transfer Object (ATO)
Situation and Addressed Problems:
Current OO design practice encourages encapsulation of access to stored data (database, files etc.) within one component or family of components. This hides access implementation details from client components and allows all security control and business logic concerning the data to be localised. This in turn leads to better de-coupling of responsibility and simplifies maintenance.
This is extremely sensible, but it leads to problems in distributed environments where the component controlling access may be on a different JVM or even a different physical machine. Communication between components, even when optimised, can become a source of significant overhead when large amounts of data need to be moved.
When this becomes an issue, encapsulation is often weakened: the client component is given direct access to the underlying data and becomes party to implementation details of the underlying datastore, losing the aforementioned advantages.
Alternatively, the component controlling access may acquire "aggregate" methods that allow large-scale data access and manipulation, producing a much smaller data set for return to the client component. As long as the methods thus acquired are generic this is not a problem; however, these aggregate methods may not be generic and may perform very specific functions from another part of the problem domain that would be better encapsulated by another component. New aggregate methods may also need to be created as the system evolves, requiring re-deployment of clients as the exposed interface changes.
Solution:
Create an Access Transfer Object (ATO).
The component controlling access to the underlying data (the server component, for brevity) provides a method returning an instance of an ATO, taking parameters that include session information (security credentials etc.) and identify which subset of the information is required. The return type of this method is an ATO interface providing only those methods for accessing or manipulating the underlying data that are allowed to the client component.
When a server component receives this request it validates the credentials and the data request and creates an ATO implementing the ATO interface.
The ATO is a class written with knowledge of how to access the underlying data within applied restrictions. The ATO is instantiated by the server component with appropriate information to gain access to the underlying data and with details of the restrictions to be applied.
The instance of the ATO is communicated to the client component, which can then use it to gain semi-direct access to the underlying data without having to perform "bulk" communication with the original server component. When the ATO is used by the client, it uses the information it was instantiated with to gain direct access to the underlying data, applying the restrictions so that the client has only the granted access.
The ATO may provide only data retrieval methods, but it can also be designed to expose data manipulation methods (within the restrictions applied by the server component), supporting batch operations.
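To make the mechanics above concrete, here is a minimal sketch of what the ATO interface and the server component's factory method might look like. All names are invented for illustration, and a plain Map stands in for the data store; in an EJB setting the server component would typically be a Session Bean.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Hypothetical ATO interface: the client sees only the operations the
// server component has chosen to grant (here, read-only access).
interface CustomerAccess extends Serializable {
    String getName(int customerId);
}

// Hypothetical server component. A Map stands in for the data store;
// a real implementation would hold connection details instead.
class CustomerServer {
    private final Map store = new HashMap();

    CustomerServer() {
        store.put(new Integer(1), "Alice");
    }

    // Validates the caller's credentials, then hands back a restricted ATO.
    CustomerAccess getCustomerAccess(String credentials) {
        if (!"trusted".equals(credentials)) {
            throw new SecurityException("not authorised");
        }
        final Map snapshot = store;  // real code would apply scoping/restrictions here
        return new CustomerAccess() {
            public String getName(int customerId) {
                return (String) snapshot.get(new Integer(customerId));
            }
        };
    }
}
```

The client holds only a `CustomerAccess` reference, so it never sees how the data is actually reached; an untrusted caller never receives an ATO at all.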
By providing semi-direct access to the underlying data, the ATO pattern can dramatically reduce the performance problems associated with bulk data in distributed systems. Having the ATO abstracted behind an interface means that the client remains de-coupled from implementation detail.
The server component's control of the instantiation of the ATO enhances this de-coupling and allows access control, security and the appropriate business logic to remain encapsulated within the domain of the server component, isolated from the client.
The environment of the client component does have to be able to support the ATO's access to the underlying data: the client component must have the ATO classes, the ATO interface and any supporting classes available.
Design of the ATO interface, the ATOs and the server component is critical to proper control. The ATO must be designed such that it can only be properly instantiated by the server component; using public, protected, private and "package friendly" access modifiers appropriately.
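One way to enforce that instantiation rule (a sketch, with invented names) is to give the implementation class and its constructor default "package friendly" visibility, so that only code in the server component's own package can construct the ATO, while clients compile against the interface alone:

```java
// The interface is the only type the client codes against.
interface AuditAccess {
    int rowCount();
}

// Package-private implementation: invisible outside the server's
// package, so a client cannot construct it directly.
class AuditAccessImpl implements AuditAccess {
    private final int rows;

    AuditAccessImpl(int rows) {  // package-private constructor
        this.rows = rows;
    }

    public int rowCount() {
        return rows;
    }
}

// Server component: the only code in a position to call the constructor.
class AuditServer {
    AuditAccess openAudit() {
        return new AuditAccessImpl(42);  // 42 stands in for a real row count
    }
}
```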
ATOs need to be carefully aligned with caching and contention policies.
Factory Pattern: Extremely suitable for ATO creation by the server component.
DAO Pattern: A DAO can be very effectively facaded behind the ATO interface and the implementation of the ATO.
That's good quality of feedback, not an ego boost. ;-)
I don't really understand what this is for. Entity Beans exist to access data stores, and CMP takes care of it all for you. I think if you had some code examples or some better description of the problem this solves it might be clearer.
It sounds as if you are passing serialized objects to clients that allow said clients to access data stores themselves. That is probably not going to work in many cases. Think of a Swing app on several desktops: network topology or security may not permit the desktops to access data sources directly (which is probably why there is an app server in the first place: to act as a facade and security layer).
This is primarily intended to be a server side pattern, not for consumption by a Swing applet client at all.
In an EJB 1.1 environment the client component would most likely be a Session Bean. Entity Beans do not do a good job of addressing bulk and batch work with underlying data stores due to the memory and communications overhead involved in large numbers of records (and large numbers of Entity Bean instances). This is a fairly major weakness in doing enterprise applications in J2EE, which I am sure you will have come across yourself.
In my work I have come across situations where code has to correlate large amounts of information from multiple sources, forcing a choice between good performance with poor OO design and poor performance with good OO design; this pattern attempts to strike a happier medium. The solution I often use is stored procedures, but in a platform-agnostic world that is no solution at all.
The pattern does not set out to replace entity beans, merely to add another 'tool'.
Please read the pattern again, bearing all this in mind and hopefully you will see why I am proposing it. If not let me know and I'll try to explain further.
I re-read it, and I still am not sure I get it. If you are worried about getting a large amount of data into memory to process, what is the issue with Value Objects (Data Transfer Objects) and session beans to populate them? Again, I'm sure a simple code example would clear this up, but your description of the problem you are solving is so vague that it doesn't immediately lead to the conclusion of what you are proposing.
Yes, in EJB1.1 you would not want to use entity beans for large amounts of data, but you are free to just use JDBC and populate your own Value Objects.
To confirm my understanding, and perhaps others, can I present a concrete example of a problem and what I think is an example of solution that follows your pattern:
We are to develop a very simple, but full-blown J2EE compliant (after all, we are the J2EE team ;-)) app that simply allows searches against a database, specifying various search criteria, and returns a result list. At the moment we expect the result lists to never be more than about a dozen rows, however we expect in the future the database may be populated from more sources, expanding the potential result list to perhaps hundreds of rows. We want to future-proof this app, restrict the number of rows returned to the user at a time, and have a paging interface.
The web server/servlet container will be on a different JVM, and indeed may be on a different machine, than the app server/ejb container. So it would be best to restrict the data sent between them as well, to what is required at request time.
Your pattern would have the server component expose a method that returned an ATO with a subset of the result list resulting from the search parameters passed in, and a parameter that says, say, just give me the first 50 entities/rows please.
How the server implements this is up to what's behind the server component.
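If I've understood the pattern, that interaction might be sketched roughly like this (all names invented, and an in-memory list standing in for the real query, so only the shape of the interface matters):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical search ATO: the server component returns one scoped to
// the search criteria, and the client pulls rows a page at a time.
interface SearchAccess {
    List nextRows(int max);   // at most `max` further result rows
    boolean exhausted();
}

class SearchServer {
    // A real implementation would validate the caller and run a query;
    // here the result list is simply generated in memory.
    SearchAccess search(String criteria, int matchCount) {
        final List results = new ArrayList();
        for (int i = 0; i < matchCount; i++) {
            results.add(criteria + "-" + i);
        }
        return new SearchAccess() {
            private int cursor = 0;

            public List nextRows(int max) {
                int end = Math.min(cursor + max, results.size());
                List page = results.subList(cursor, end);
                cursor = end;
                return page;
            }

            public boolean exhausted() {
                return cursor >= results.size();
            }
        };
    }
}
```

The client would call `nextRows(50)` repeatedly, so only the requested page ever crosses between the web tier and the ATO.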
Incidentally, we are to provide sorting capability on the result list as well. Would that be within the problem domain of this pattern?
Have I missed the point entirely?
Perhaps you could give an example of the interface?
It seems to me that if the ATO is created and returned by the server object, it would reside on the server itself, so we will not be avoiding the overhead of transferring large amounts of data over the network. I don't understand what we are gaining here.