Java Development News:

TheServerSide Symposium June 2003 Coverage

By Jason Carreira, Nitin Bharti, Floyd Marinescu, Abhay Bakshi

29 Jul 2003 | TheServerSide.com


Day 1

TheServerSide Symposium kicked off with a welcome speech from Floyd Marinescu, Director of TheServerSide.com. For those who couldn't make it out to JavaOne this year, the Symposium was a more focused alternative. Floyd made it clear in his speech that at TSSS there would be no bells and whistles, rock concerts or fancy backpacks, no cheeky keynotes or Schwartzian jokes about Christina Aguilera. No distractions or diversions from what developers really care about: focused, hype-free, hard-core technical content. The Symposium was different both from JavaOne and from other technical shows because of the care and expert attention that went into choosing speakers and content. There was no call for papers - TSS personally invited key people who are contributing to the enterprise Java space through their work on the specs, in open source, in book writing, and more.

The conference attendance was over 330, with 40% local from within Greater Boston, 30% international from countries such as Germany, Russia, Pakistan, Singapore and Japan, and another 30% from the continental US and Canada. The attendees represented nearly every vertical you can imagine, from insurance companies, to banking, to religious groups and the MIT Human Genome Research project. Floyd ended the intro speech saying that this proves that J2EE is now a mainstream and accepted technology investment, citing James Duncan Davidson's quote that J2EE will be like the 'Cobol of the 21st Century'.

The movers and shakers from the J2EE industry were all here: presenting sessions, sitting on panels, networking with the audience and debating on the future of J2EE. The conference was a plenum of Enterprise Java gurus and a concourse for developers, architects, and managers wanting to learn more from industry experts; there was just as much to be learned in between sessions, huddled with fellow developers and industry figureheads, as there was during the sessions and panels.


Bitter EJB: Common Programming Traps with EJB

This talk was given by Bruce Tate, author of 'Bitter EJB', which looks at common EJB anti-patterns. An anti-pattern is a prose description of a repeated behavior which produces negative consequences. It names the pattern, describes its consequences, describes the problem that the anti-pattern attempted to solve, and offers other possible solutions. It's important to learn from the experiences of people who have been burned by these problems and to make iterative improvements. A successful anti-pattern communicates a problem and solutions and is able to prevent future mistakes by feeding into the software development process.

EJB is ripe for anti-patterns because it is very complex and gives many choices for implementation. EJBs are designed to solve transaction management and the clustering of distributed components. Distributed components are inherently complex and tricky. There are also complex problems to be solved, such as O/R mapping and long running transactions. The fact is, EJBs are ENTERPRISE software, and enterprise software is complex.

Attendees conversing after a session

Bruce looked at one of the classic anti-patterns: the "Golden Hammer". This is the temptation to use this new tool you've acquired to solve every problem. EJB has become the modern "Golden Hammer". Often, this is because people want to put this relatively new, hot technology on their resume. The term "Enterprise" has also become diluted, and many apps which do not require EJB are using them anyway.

What should requirements for EJB be? Loosely coupled components, massive scale, distributed transactions, and asynchronous APIs are good examples. Distributed transactions are a particularly strong qualification. If you don't need distributed transactions, clustering, and the like, why use EJB?

Stateful SessionBeans are not the black sheep they've been made out to be. State management can be done at several levels, including Http Sessions, Stateful SessionBeans, and the database. Http Sessions are good for web applications, but what if you need different clients to have state? If you need a persistent session, Stateful SessionBeans don't work as well and the database is a better solution. To use Stateful Session Beans successfully, enumerate the contents in the interface and only allow those objects to be stored. If at all possible, don't use state. If it can be done, carry your state with each method call so you can pool and reuse stateless instances.
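
As a rough sketch of that last point (the names here are hypothetical, not from the talk), the same shopping-cart operation can be written so the caller carries the conversational state on each call, letting the container pool and reuse stateless instances:

    // Conversational state travels with every call, so the service itself can
    // be a pooled, stateless component (for example a Stateless SessionBean).
    public interface CartService {
        CartState addItem(CartState state, String itemId, int quantity);
        String checkout(CartState state); // returns an order id
    }

    // A plain serializable value holder that the client, the HttpSession, or
    // the database keeps between calls.
    class CartState implements java.io.Serializable {
        private final java.util.List items = new java.util.ArrayList();
        public void add(String itemId, int quantity) { items.add(itemId + ":" + quantity); }
        public java.util.List getItems() { return items; }
    }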

Don't roll back transactions by throwing system exceptions; throw the appropriate exception type instead. Abusing system exceptions makes it difficult to distinguish true system exceptions from what should really be application exceptions.
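
A minimal sketch of that advice, assuming container-managed transactions and a hypothetical business method (the bean and exception names are illustrative):

    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    // Hypothetical checked application exception that the caller can handle.
    class InsufficientFundsException extends Exception {
        InsufficientFundsException(String account) { super(account); }
    }

    public class TransferBean implements SessionBean {
        private SessionContext ctx;

        public void transfer(String from, String to, double amount)
                throws InsufficientFundsException {
            if (!hasFunds(from, amount)) {
                ctx.setRollbackOnly();                      // mark the CMT transaction for rollback
                throw new InsufficientFundsException(from); // application exception, not EJBException
            }
            // ... debit 'from' and credit 'to' ...
        }

        private boolean hasFunds(String account, double amount) {
            return false; // balance lookup elided
        }

        public void setSessionContext(SessionContext ctx) { this.ctx = ctx; }
        public void ejbCreate() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void ejbRemove() {}
    }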

MessageDrivenBeans are asynchronous stateless components which implement the MessageListener Interface. Three points of breakage in JMS systems are:

  1. Message production breaks
  2. The message payload breaks
  3. Message consumption breaks

XML is often our payload in JMS. XML has become another "Golden Hammer". It makes sense around the edges, as configuration files and integration between systems. Why would you want to send XML between your components within one system? Consumers of JMS messages must also be well designed. Monolithic message consumers are an anti-pattern where all handling of a JMS message is in one large method, rather than breaking it out into handlers.
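
A sketch of the alternative to a monolithic consumer, with hypothetical message types and handler names; for brevity only the MessageListener side of a message-driven component is shown:

    import java.util.HashMap;
    import java.util.Map;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // onMessage() only routes; each message type gets its own small handler.
    public class OrderMessageConsumer implements MessageListener {

        interface Handler { void handle(TextMessage message) throws Exception; }

        private final Map handlers = new HashMap(); // message type -> Handler

        public OrderMessageConsumer() {
            handlers.put("order.created", new Handler() {
                public void handle(TextMessage m) throws Exception { /* create the order */ }
            });
            handlers.put("order.cancelled", new Handler() {
                public void handle(TextMessage m) throws Exception { /* cancel the order */ }
            });
        }

        public void onMessage(Message message) {
            try {
                TextMessage text = (TextMessage) message;
                Handler handler = (Handler) handlers.get(text.getStringProperty("type"));
                if (handler != null) {
                    handler.handle(text);
                }
            } catch (Exception e) {
                // log and let the container decide about redelivery
            }
        }
    }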

Often, people say they dislike EJB, but what they mean is that they dislike Entity beans. Entity beans suffer from a lack of flexibility. You can't inherit and extend Entity beans. Reentrant access is either on or off for the whole bean, with no realistic way to manage it in the cases where reentrancy isn't actually a problem. Entity beans are also too different from plain Java objects and are both too heavy, because of the container overhead, and too fine-grained to be remote objects, creating the need for patterns such as the Session Facade.

What can we do to make Entity Beans usable if we have to use them? Keep your primary keys short, as long keys are inefficient. Use CMP instead of BMP, which allows the container to optimize loads rather than suffering from the N+1 problem of BMP. Use local interfaces; don't try to make your Entity beans remote components. Manage the complexity with XDoclet or a good IDE so you don't have to keep your classes and deployment descriptors in sync by hand.
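
As a rough illustration of the local-interface advice (the bean and method names are invented for this example), the entity is exposed only through local interfaces, so calls from a session facade in the same JVM avoid remote semantics entirely:

    import javax.ejb.CreateException;
    import javax.ejb.EJBLocalHome;
    import javax.ejb.EJBLocalObject;
    import javax.ejb.FinderException;

    // Local component interface: no RemoteException, no serialization overhead.
    public interface OrderLocal extends EJBLocalObject {
        String getStatus();
        void setStatus(String status);
    }

    // Local home interface used by a session facade in the same JVM.
    interface OrderLocalHome extends EJBLocalHome {
        OrderLocal create(Integer id) throws CreateException;
        OrderLocal findByPrimaryKey(Integer id) throws FinderException;
    }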

Metadata looks to be critical to the future of EJB in tools support and reducing the complexity of deployment descriptors. .NET compatibility will be important to the future, as they will need to coexist. Web Services support is being built into EJB already, as WS-I has been added to J2EE 1.4.


AOP, EJB and the Future of J2EE

Rod Johnson took a pragmatic approach to discussing AOP's potential to be used in our applications now. AOP doesn't have to be a scary or academic topic. EJBs provide an example of separating cross-cutting concerns, but they are limited because the aspects and pointcuts are both fixed. AOP can provide a simpler yet more powerful alternative to EJB. Although JBoss 4 is the first application server to supply an AOP framework, it won't be the only solution. AOP is not only going to be available or valuable as part of a J2EE stack, but will find its way into all levels of Java development.

After discussing the basic terminology of AOP (joinpoint, advice / interceptor, pointcut), Rod examined how these terms could be applied to what we see in EJB now. The bean implementation class is the "target object", transactionality and security play the roles of advice. In this case the joinpoints and aspects available are well understood but limited.

Dynamic proxies were originally suggested by Rickard Oberg last year as a way of implementing AOP; since then, many other implementations have sprung up, including Nanning and Rod's own Spring Framework. There are limitations in the pointcut model, since everything must be behind an interface, and property access cannot be intercepted. To reach that level of pointcut definition, you can move to byte code modification, as AspectWerkz and JBoss 4 do. To define really complex joinpoints, you can extend the Java language, as was done by AspectJ, which pioneered the AOP space.
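
A minimal sketch of the dynamic proxy approach using only the JDK (the class here is illustrative, not the API of any of the frameworks mentioned):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Every call to the wrapped target goes through invoke(), where "advice"
    // can run before and after the real method.
    public class TracingProxy implements InvocationHandler {
        private final Object target;

        private TracingProxy(Object target) { this.target = target; }

        public static Object wrap(Object target) {
            return Proxy.newProxyInstance(
                    target.getClass().getClassLoader(),
                    target.getClass().getInterfaces(),
                    new TracingProxy(target));
        }

        public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
            long start = System.currentTimeMillis();          // "before" advice
            try {
                return method.invoke(target, args);           // proceed to the target object
            } finally {
                System.out.println(method.getName() + " took "
                        + (System.currentTimeMillis() - start) + "ms"); // "after" advice
            }
        }
    }

Anything returned from wrap() can only be used through the interfaces it implements, which is exactly the interface-only limitation described above.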

The power of a richer pointcut model can be deceiving. Is the greater power needed, or even helpful? Defining a language extension, such as AspectJ, gives you more rope to hang yourself. If you don't need the extra power, the extra complexity is not worth the tradeoff.

Interceptors can be either stateless or stateful. Stateful interceptors are essentially "mixins" which add functionality to your object. Stateless interceptors are generally more reusable and can be reused across all objects. Introductions allow you to add additional interfaces to your target object, allowing your object to display additional behavior at runtime. Behavior is defined by the combination of interceptors, interfaces, and the target object.

So how much power is worth the tradeoff in complexity? Language extensions, such as AspectJ, force you to change your development and build process, and add a lot of complexity to learn as part of a project. Byte code modification can be very powerful but can cause problems with classloader hierarchies in appservers and may be difficult to port across containers. Dynamic proxies are very attractive from a compatibility standpoint, but they rely on reflection, which may have some performance implications, and they don't allow field interception. Field interception is somewhat dubious in its value, however.

How does AOP relate to EJB? EJB is basically AOP with a limited set of joinpoints and aspects. AOP is in fact calling into question the very existence of EJB. Why do we need a full EJB container if all we want is a transaction interceptor and a security interceptor? AOP allows us to have the best of EJB without many of the negatives. It also allows us to escape the monolithic container, to add our own interceptors and services to our call stacks, and to bring this power outside the server to the client if needed. Rod compared EJB containers to large and lumbering dinosaurs, versus a new set of small, lightweight containers, which are analogous to the mammals which eventually became the dominant life-form.

What are the dangers of AOP? There are performance and complexity concerns which must be carefully addressed, but patterns and best practices will develop to manage these dangers. As an analogy, OOP added a layer of complexity which is now relatively well understood and provides power and flexibility when used correctly.

AOP adds further benefits in the area of testing. Unit tests can be run outside a container. You can unit test your objects without the aspects, and separately unit test your aspects. This allows for a very fine-grained unit test suite.

There is a movement, called the AOP Alliance, to standardize the interfaces and metadata used by AOP frameworks, which will allow for the reuse of interceptors across AOP frameworks. This initiative is bringing together members from several AOP frameworks.


Keynote: Coding the Future - AOP and JBoss

Bill Burke, Chief Architect of the JBoss project, started this talk by defining Aspect Oriented Programming (AOP) in simple terms: AOP basically involves inserting an object between the caller of a method and the method itself. The audience nodded in agreement. Later in his talk, he delved deeper into the responsibilities of this intercepting object.

AOP is quite a new programming paradigm. Many programmers still don't fully understand how to use AOP in their programs. It will take some time before AOP gains wider use and the true AOP 'paradigm shift' occurs.

AOP provides a clean separation between system architecture and application code which allows you to make architectural decisions later in the development process. EJB is an example of static AOP. Security, transactions, and persistence in EJB are examples of pre-packaged aspects.

Bill asked the audience not to compare AOP and OOP verbatim. AOP will not replace OOP; in fact, AOP complements OOP and both will continue to exist together.

He then moved on to explain the elements of AOP:

  • Interceptors/Advices
  • Introductions
  • Metadata and Metatags
  • Pointcuts

An Interceptor is an object that intercepts a method invocation before the method executes. It embodies behavior that adds, removes, or replaces infrastructure functionality, and it provides pluggability with no changes required to your business logic.

In practice, interceptors are arranged in a chain in which each interceptor can be plugged in or removed independently. Every pluggable behavior can be attached by writing a simple interceptor class.
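
A generic sketch of such a chain (plain Java, not the JBoss AOP API): each interceptor decides what to do before and after proceed(), and attaching or removing behavior is just a matter of editing the list, with no change to the target:

    import java.util.Iterator;
    import java.util.List;

    public class Invocation {
        private final Iterator chain;   // remaining interceptors
        private final Runnable target;  // the business logic being invoked

        public Invocation(List interceptors, Runnable target) {
            this.chain = interceptors.iterator();
            this.target = target;
        }

        public void proceed() {
            if (chain.hasNext()) {
                ((Interceptor) chain.next()).invoke(this); // next link in the chain
            } else {
                target.run();                              // finally reach the real method
            }
        }

        public interface Interceptor {
            void invoke(Invocation invocation);
        }
    }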

Another good thing about interceptors is that they can detect abnormal behavior and decide either to notify or micro-boot.

Bill described JBoss 4 as Aspect-Oriented Middleware. Middleware, he says, is by nature cross-cutting. Whenever you introduce Aspects into your middleware, you should experience smooth, fluid, iterative development with cleaner separation between the System Architect and the Application Developer.

Bill showed a demo in which he defined an Interceptor that tracks the pageviews on a website. He had an existing class running in Tomcat and added the tracking aspect by simply dragging and dropping the pre-defined aspect onto the class. He then clicked around on the local JBoss server instance and showed the pageview results.

AOP is natural in middleware. It provides ease of software development and can be applied to any software development project. AOP is continuously evolving and offers a lot of opportunities for creative minds.

This talk was very well received. "They (JBoss) are being innovative which is not what a lot of open source projects do that try to follow commercial projects," said Jason Carreira. The presentation, said Vikram Kumar of jPeople, "cleanly laid out the principles and showed real life examples of where they're applicable."


Transactions, Distributed Objects and J2EE

Bruce Martin of The Middleware Company presented this session and started off by identifying the need for transactions and ACID properties. We need transactions because things fail, things need to happen concurrently, and because we need to persist data reliably. Without the ACID properties, we can't guarantee that our code will perform its function correctly. We need to guarantee the atomicity (all or nothing), consistency (things are in the pre-transaction state or the post-transaction state, but nothing in the middle), isolation (preventing concurrent changes to the same data from different transactions), and durability (our changes are persisted) of our changes. Transactions allow us to simplify our error handling and fault monitoring.

Defining our transactions programmatically in our code doesn't allow for code reuse, because we may want to use our transactionally scoped method inside a larger transaction. We can try nested transactions, with the outer transaction nesting the transactions in the methods called. Unfortunately, database vendors don't provide this functionality. Instead, let's let the system decide our transactional boundaries based on our meta-data declarations of our transactional requirements. This is what container managed transactions in EJB do, by handling the transactional scope for us as our components are called.
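
For contrast, a sketch of what programmatic demarcation with JTA looks like (the class and method are hypothetical); because the method owns its own begin/commit, it cannot easily be nested inside a larger transaction started by its caller:

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class OrderPlacer {
        public void placeOrder(String customerId) throws Exception {
            UserTransaction tx = (UserTransaction)
                    new InitialContext().lookup("java:comp/UserTransaction");
            tx.begin();
            try {
                // ... write the order and debit inventory ...
                tx.commit();
            } catch (Exception e) {
                tx.rollback();
                throw e;
            }
        }
    }

With container-managed transactions, the same method would instead declare an attribute such as Required in its deployment descriptor and would simply join any transaction already in progress.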

Transaction attributes must be designed carefully. If your session facade has a transaction attribute of Supports, Never, or Not Supported, every call to an entity bean or other session bean requiring a transaction will create a new transaction. Unfortunately, many containers make Supports their default transaction setting.

Long transactions are the real problem. For ACID transactions to be effective, they must be short-lived. In pessimistic locking, resources will be held and requests for them will be blocked. With optimistic concurrency control, our system can continue to work, but if there's a serialization conflict, the work will be lost.

Web activities typically involve long transactions with many steps and unbounded timelines, since we wait for user input. We can deal with this by using several short transactions and compensating transactions, which are essentially scripts to rollback the committed changes of earlier transactional steps. These steps and related compensating transactions are a specific use of workflow. However, this set of short transactions fail the isolation requirement of ACID transactions, as the changes of the committed steps are visible to the system before the process has completed.

There are ongoing efforts to define more flexible models for dealing with this. The OMG Activity Service was adopted in 2001, and JSR 95, the Activity Service for J2EE and Java, is the Java implementation of this. This is also a very hot and important topic in the Web Services arena, where long lived conversations and service invocations need to be choreographed into one "transactional" process.

Distributed transactions can allow ACID transactions across resources. The transaction manager talks with the transactional resources to allow them to rollback or commit as one unit. This is supported by the XA protocol for distributed transactions, with XA JDBC drivers and JTS to manage distributed transactions. However, it can be a problem to delegate concurrency control to each resource, since isolation cannot be maintained across resources. If both resources use pessimistic locking, transactions can block on concurrent changes, whereas with either optimistic concurrency or a mix of pessimistic and optimistic locking in your resources, data consistency can be compromised. It is important to realize that JTS distributed transactions provide failure atomicity for distributed data sources, but not concurrency control across them. Also beware that problems with XA transactions are often caused by less-than-robust XA JDBC drivers.


Productivity Analysis - Model-Driven, Pattern-based development with OptimalJ

This was a special talk delivered by Mike Burba, Product Manager of OptimalJ, and David Herst of The Middleware Company during a scrumptious 3-course meal sponsored by Compuware in a near-capacity Colonial Ballroom. The audience was impressed with the meal, but initially a little skeptical of the case study; however, the talk wasn't a product pitch for OptimalJ as many had initially thought. It was more a testimony to the productivity increases that can be observed when Model Driven Architecture (MDA) is used vs. traditional approaches to software development. The results, as Ed Roman put it, were 'staggering'.

This was in essence a preview of the results, which haven't yet been publicly announced, but Mike insisted that the 'top level talent' present in the room deserved to know about this in advance.

"It's all about simplification," said Mike. When the development process is easy, this increases the number of developers in the J2EE space. It is this move towards simplification in in tools and IDEs that is driving innovation in the development space. Mike also cited J2SE 1.5 and EJB 3.0 as a move towards ease of development to make the Java platform accessible to more developers.

Models are a 'pure representation' of the business domain whereas patterns are the architectural blueprints. Patterns are the best practices. Tools are used to transform those models into working applications; however, this cannot be done without standards such as MDA.

OptimalJ is a model-driven, patterns-based J2EE development environment that espouses the MDA standard. It automatically transforms visual models into working applications. It also accelerates development, integration, and the maintenance of applications.

The objective of the study was to scientifically prove the benefits of MDA development using a neutral third party of experts, The Middleware Company. There were two teams, the MDA team and the 'traditional' team, each consisting of three members: a senior architect and two developers. The study compared the development of the famous, and sometimes infamous, PetStore application.

The MDA team automatically generated their code from UML diagrams while the traditional team created prototypes. Mike referred to the MDA approach as 'technology-agnostic development'. MDA allows you to observe and modify your Platform Independent Model (PIM), independently from the underlying platform and its respective Platform Specific Model (PSM). Changes made to your PIM should naturally be reflected in your PSM and vice versa.

In the case study, the MDA team completed the development of the application in 330 hours while it took the traditional team 505 hours, which supports Compuware's claim that OptimalJ lets you develop code 30 to 40% faster than the traditional approach. Many interesting questions were asked in the Q&A that followed, reflecting a renewed interest in MDA.

Mike referred to an article by Stefan Tilkov, MDA From a Developer's perspective, published on TSS back in December 2002. The article gives a developer's perspective on MDA as opposed to a manager's. Developers have always been wary of such methodologies, and this presentation was a step towards removing some of the skepticism about MDA tools.


Open Source Enterprise Development Panel

One of the most highly anticipated talks of the Symposium was the Open Source panel, featuring Erik Hatcher, Vincent Massol, Mike Cannon-Brookes, Christophe Ney, Bill Burke, and Gavin King.

Floyd acted as moderator for the panel and initiated discussion by asking how open source can create a higher level of quality in code.

"You can't make a blanket claim that (open source code) is higher level than commercial code because there's no way to tell the quality of commercial code", responded Gavin King much to the amusement of the audience. Gavin is founder of the Hibernate project.

Mike Cannon-Brookes, founder of the OpenSymphony group and creator of JavaBlogs.com, claimed that in open source "there are more people there to identify, find and fix bugs and there are different perspectives."

There was discussion on how a certain 'competitiveness' in open source projects spurs people to make the code better. "If you look at XP, and Unit Testing, these were all methodologies developed by closed source companies but are being innovated by open source. There are 5 or 6 different AOP frameworks driving innovation through 'competition'," said Mike.

Gavin criticized expert groups formed by vendors. An open source project can be small enough in the beginning to solve small problems; however, competition fuels innovation. You don't have that with JSRs since they don't compete with one another.

Mike Cannon-Brookes, Vincent Massol,
Jason Carreira, Juergen Hoeller and others

Floyd asked the panel about some of the latest innovations open source is bringing to Java.

"Of course one of them is Unit testing. Junit, Cactus. Build frameworks such as Ant and Maven," responded Vincent Massol, Creator of the Apache Cactus project.

"I like the fact that these communities are coming together. These projects are bringing developers together. People don't need to reinvent the wheel. Lucene really rocks. The whole Agile movement is very exciting. Continuous integration makes us focus on quality and makes us deliver stuff right away," said Erik Hatcher, Apache Ant project committer and author of Java Development with Ant.

But how do you choose between all these different open source frameworks?

According to Christophe Ney, president of the ObjectWeb Consortium, you need to look at the popularity, licensing, and business models behind the product.

Bill Burke, Chief Architect of JBoss, talked about the importance of the branding of a product. Apache and JBoss are well known, proven 'brands'.

Erik commented that just by following the blogs of respected community members, you can learn a lot about what frameworks are becoming popular. He cited Hibernate as an example and how there was recently such a buzz around it. "When people start talking about it, it's probably worth checking it out."

Ease of use, easy start up and good documentation were also mentioned as important criteria for picking a framework. Another interesting point was that a project should have a good leader and spokesperson.

There was a brief discussion on JBoss documentation and whether JBoss will be providing a JDO interface.

"We need to sell our documentation to make money off open source. JDO will be a 'personality' of our persistence engine. We are not going towards JDO as the preferred area. We are not 'attached' to JDO. If the query language sucks, we won't use it," said Bill.

Floyd asked the panel whether closed-source companies are stifling innovation.

Bill talked about how the licensing model is the killer of innovation and that as a product is commoditized, it ceases to grow. Companies cut the service organization as a product/company begins to die.

Mike disagreed with Bill. He thinks that closed source companies are becoming more innovative to compete with open source; closed-source companies that are being forced to compete do start becoming more innovative as a result.

There were disparate opinions on the panel as to whether open source software is hurting commercial revenue models. Bill said that it is destroying revenue models, and that companies that are forced to base their model on a service model and not a licensing model will be hurt in the future by open source.

In the Q&A session, Randy Heffner (Giga VP), who was in the audience, asked whether people are getting sick of not making money from their work in open source projects. Bill, who had the microphone in his hand while the question was being asked, smiled and passed it over to the other panelists.

Christophe and Mike commented that some developers are paid by companies and that not everybody codes on these projects for free. For instance, people who have contributed to Apache have been from commercial companies. They have been doing it on behalf of their companies.

Gavin further commented on the nature of open source projects: people contribute what they want and then move on. Few stay with a project for its life; people come and go.

According to Vincent, there is inherent value in coding for open source beyond money, which doesn't necessarily mean you're doing it for free: recognition, writing books, good PR, and contributions to academic and software institutions are all benefits. And people have fun doing it.

Somebody from the audience, a consultant for the U.S. government, commented on the increasing phenomenon of open source code making it into shrink-wrapped commercial products. He said that governments lock these products out because of the possibility of malicious code making it into commercial products. As a result they prefer Microsoft as a more trusted vendor.

Bill retorted by saying that the Department of Defense recently issued a press release stating that open source software must be considered on equal footing within the department. He mentioned how even the German election ran on open source software and that nuclear tracking software is running on open source. As states go broke and become hard up for cash, governments are now looking at open source as a cheaper alternative. There is also the issue of governments having a moral and ethical responsibility to try to save public money by using open source.

Christophe commented that governments in Europe are pro-open source and that it is helping to revive the European software industry.

The Q&A continued late into the night as people asked about uniting JCP standards with open source implementations and even the possibility of open sourcing Java. People were also asking about who can commit on popular open source projects and how one can go about becoming a committer.


Day 2

The hardcore afterparty: Sun's John
Crupi, Giga's Randy Heffner, and
other gurus duking it out over beers
with developers and open source guys
including Hibernate's Gavin King, and others.

The second day of the Symposium kicked off with a keynote from Tyler Jewell. Some of the highlights of the second day were talks from Rod Johnson on J2EE myths, 'Patterns Frameworks & Micro-Architectures' by John Crupi, and a unique talk on JavaBlogs.com by Mike Cannon-Brookes. BEA provided a pizza and pasta lunch, and in the evening TheServerSide team hosted a Beer Tent party. Also in the evening was a keynote presented by Rick Ross of Javalobby and a panel on the future of J2EE featuring many important industry figureheads such as Cedric Beust, John Crupi, Rod Johnson, and Jim Knutson. Attendees gathered around and debated late into the night about the JCP and open source technologies.


J2EE Myths and Why They're Dangerous

J2EE myths are common beliefs that cost money and time. They make it unnecessarily hard to develop applications because developers end up trying to solve phantom problems they may not actually have. You should solve only the problems you actually have, because every solution comes with some kind of cost or trade-off. J2EE has a pretty bad track record in terms of complexity; there's a fascination with using the full J2EE stack.

Today's myths include:

  • There are no simple problems
  • Database portability is always required
  • It's ok to defer application server choice
  • Distrust relational databases
  • J2EE developers always know best
  • J2EE allows developers to forget about low-level issues
  • Achieve scalability through distributing objects (Stateless SessionBeans with remote interfaces)
  • J2EE = EJB
  • Entity beans are a credible O/R mapping tool

As a community, we can be somewhat arrogant. We think we know what's best, even if our highly paid DBAs are telling us we're causing problems. A lot of these problems are caused by over-engineering. We need to apply the principle of YAGNI ("You Ain't Gonna Need It"). We should question our assumptions and default development modes, such as the assumption that we need JTA for transactions in case we ever need multiple databases.

Over-engineered solutions are more difficult to develop, with more lines of code by far, and harder to maintain. Over-engineering costs us money. We overspend on application servers that provide more than we need. The extra layers of infrastructure make development cycles slower and cause a performance overhead for every call.

One of the causes of over-engineered solutions is an over-emphasis on patterns. If you actually have the problem that a pattern solves, then the pattern is probably the right way to solve it, but don't apply patterns for problems that aren't there. This is an example of letting the technology drive the solution and assuming requirements that don't exist.

Portability is often overly stressed in J2EE. There are many types of portability, such as portability between databases (whether it's within a category, like RDBMS's, or across categories, like RDBMS vs. OODBMS), between application servers, or between operating environments. Only portability between operating environments works well, at nearly zero cost.

Database portability is usually a myth. It's a very common assumption, but is a relatively unusual business requirement. Only a small minority of J2EE applications must run on multiple databases. Applications are very seldom ported from one database to another because companies are very tied to their database, often more so than the applications running on top of it. Database portability isn't free because of the differences in capabilities and syntax between database implementations. It's impossible for an application to completely hide these differences.

The simplicity of porting between application servers is overrated. Running an application on an application server involves more than the J2EE spec. There are management and operational aspects which must be solved. The J2EE specification is never going to be able to define every needed aspect of an application, so you'll need to use vendor-specific features for these grey areas. An example of this is read-only entity beans.

Rod Johnson shows his book
during one of his talks

J2EE is not about portability. It's a good solution for building applications. Portability is a bonus, but an obsession with 100% portability can cost time and money. No-one will port an application that is not successful. .NET's lack of portability can be a boon for them, as they are only focused on delivering a solution.

There are real benefits of portability, however. You can't port at zero cost, but at least portability is an option, unlike with a proprietary framework. You should think very carefully before using proprietary features which lock you into one vendor. We should value design portability over code portability. We should put proprietary features behind interfaces to enable them to be replaced later.

There is a distrust of relational databases in the J2EE world. O/R mapping is often valuable, but sometimes set-oriented data access is the best option. Sometimes, even stored procedures in the database are the best answer. Any direct usage of the database should be encapsulated behind interfaces, though.
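
A sketch of that encapsulation (the DAO name, stored procedure, and DataSource wiring are assumptions for illustration): callers depend only on the interface, so the set-oriented, vendor-specific work can change without touching them.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // The contract the rest of the application sees.
    public interface ReportDao {
        void rebuildMonthlySummary(int year, int month);
    }

    // One implementation happens to call a stored procedure; nothing outside
    // this class knows or cares.
    class OracleReportDao implements ReportDao {
        private final DataSource dataSource;

        OracleReportDao(DataSource dataSource) { this.dataSource = dataSource; }

        public void rebuildMonthlySummary(int year, int month) {
            try {
                Connection con = dataSource.getConnection();
                try {
                    CallableStatement call = con.prepareCall("{call rebuild_monthly_summary(?, ?)}");
                    call.setInt(1, year);
                    call.setInt(2, month);
                    call.execute();
                    call.close();
                } finally {
                    con.close();
                }
            } catch (SQLException e) {
                throw new RuntimeException("Summary rebuild failed", e);
            }
        }
    }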

We should be open to non-Java technologies that get the job done. It's valuable in an organization to standardize on a development platform, but don't marginalize the systems which work now. It's important to value the resources you have, including legacy code and legacy application developers. We should learn to listen to experts in their fields, especially DBAs.

It's tempting and easy to distribute components in J2EE, but it's usually wrong. The problem spaces where distributed components are needed are relatively few. Heavily distributed architectures are likely to be slow. Distribution also breaks down our OO designs, as we need to be concerned with network access, serialization, etc. which forces the use of Data Transfer Objects. Even trying to use the local optimizations of application servers is not the answer, as you're deploying them with remote semantics.

Rather than distributing components, distribute entry points with hardware switches, web connectors, through the servlet container, or using SLSB stubs. A complete request / response cycle should be handled on one node.

J2EE being equated with EJB is perhaps the most expensive myth. EJB is needed for a certain set of problems, and it simplifies or makes possible some very difficult ones, but it is overkill for most applications. EJBs add complexity to your development and are very difficult to test. They cause delays in the build-deploy-test cycle, cause classloader complexity, and introduce a lot of overhead.

According to Rod, Entity beans are a poor technology. Performance issues are being addressed, but performance is not what it should be or needs to be. Entity beans are a very complex component based programming model. But why? Nobody advocates using Entity beans as remote components, so why make them components at all? Further, the component model causes problems with Entity beans as an O/R mapping tool. Inheritance is not supported and you are limited to one entity per table. Persistence should be transparent. Persistent objects should be true objects, with behavior, and should be easily testable. Finally, there are better solutions such as JDO, Hibernate, TopLink, or even JDBC. These solutions solve the problem well, without the cost of Entity beans.


Java.blogs: The movement, the site, the technology

Mike Cannon-Brookes started this talk off by describing a weblog, or blog, as an online personal journal and showed screen shots of his own blog, "Rebelutionary". He described blogs as the best place to hear personal views of new technologies and compared them to the user comments on Amazon. He next went on to describe RSS (Rich Site Summary) as the format used for syndicating content in the blogging world. He described the role of the "news aggregator" as the client of these RSS feeds to show an aggregated view of multiple sources and javablogs.com's place as a web-based aggregator of Java-centric blogs.

Mike next went on to describe some Java tools for blogging. Roller is the most popular and advanced of these, and is built using Struts, Velocity, and either Hibernate or Castor for persistence. SnipSnap is a combination of Blog + Wiki with an advanced Wiki rendering engine that's mostly hand-rolled technology. Blojsom is a Java port of a Perl tool, Blosxom, and uses the file system for storage and a pluggable view layer using either JSP, Velocity, or FreeMarker. The easiest way to get started with Java blogging is to go to freeroller.net, which is a free installation of Roller hosted at Javalobby.

Javablogs is a web based aggregator of Java-centric blogs. It currently aggregates over 1.5 million words in over 18,000 entries from over 360 blogs. Javablogs provides a central feed for all of the aggregated blogs, a searchable index of the entries, and a ranking of popular entries based on number of click-throughs. It can also send you a daily update (the send time configurable per user) of the most popular 20 posts. It provides a true community of equals, with each post rising or falling on its own merits, with no central editor.

Javablogs.com was built by Atlassian (Mike's company, and the makers of Jira, an issue tracking tool) using their pre-built components in about 48 hours. For scheduling, they use the open source tool Quartz combined with their own XML configuration utility, which they're making open source. For persistence, they use the Open for Business Entity Engine (Ofbiz EE), which is a part of the Ofbiz project. It's a light wrapper over JDBC and is "proud to be relational". OfbizEE is configured through 2 XML files, a logical model and a database-specific field mappings file, from which it can create or update an application's database schema.

Indexing in Javablogs is handled through Lucene, which allows full text searches of the blog entries. User management is handled through a combination of OSUser, a project from OpenSymphony, and their own security framework, which is also going to be released as open source. OSUser acts as a plugin user manager for most application servers and as a cross platform user management API, and can be backed by many kinds of user and profile stores, including LDAP, JDBC, Hibernate, and Entity EJB. The Atlassian-security package provides a rich security model, including URL pattern matching and WebWork Action security.

The view layer of Javablogs is implemented using WebWork, an MVC framework from OpenSymphony for building web applications. WebWork is a very clean framework and allows many different view technologies, including JSP, Velocity, and JasperReports. Although Javablogs is built on WebWork 1.3, Mike was going to give a talk on WebWork 2 later, so he left out the details in this talk.

Javablogs uses Sitemesh, another OpenSymphony project, to do page templating. Sitemesh applies the decorator pattern to HTML, and is able to wrap a page's content with whatever template you specify. Mike showed an example of decorating the Javablogs site to look almost exactly like TheServerSide. Mike explained how Sitemesh understands HTML and is able to pull the decorated page apart into a Map of named pieces to be put into the appropriate spots in a template.

The final piece of the website is generating automated emails. For this Javablogs uses Velocity, an opensource text templating engine from Jakarta. Velocity is similar to Sitemesh in that it merges provided content with a template to generate the final output.

To pull all of these pieces together, they use Maven as the automated build tool. Maven provides a lot of pre-packaged build pieces, unlike Ant where you have to duplicate the same common build pieces in every build file. Maven will also download library dependencies if you don't have them locally.

Future challenges for Javablogs include performance issues and filtering / rating. The site currently generates over 1GB of traffic per day, even with a GZip filter. They make use of OSCache to cache pages, and Mike says it's important to identify the 20% of your pages that generate 80% of your traffic. They're currently working on new algorithms for bringing the best content to the top and looking at options such as Bayesian filtering and a distributed rating system.


WebWork - Strutting the OpenSymphony way

WebWork2 is a pull-based MVC framework focused on componentization and code reuse. It is currently in pre-beta, but is being used by several opensource projects and a few commercial projects in development. This is the second generation of WebWork, which was originally developed by Rickard Oberg, and in this release, what was WebWork has been broken into two projects, XWork and WebWork 2.

Xwork is a generic command pattern implementation with absolutely NO ties to the web. Xwork provides many core services, including interceptors, meta-data based validation, type conversion, a very powerful expression language (OGNL - the Object Graph Navigation Language) and an Inversion of Control (IoC) container implementation.

WebWork2 provides a layer on top of Xwork to do HTTP request / response handling. It includes a ServletDispatcher to turn HTTP requests into calls to an Action, session and application scope mapping, request parameter mapping, view integration with various web view technologies (JSP, Velocity, FreeMarker, JasperReports), and user interface components in the form of JSP tags and Velocity macros wrapped around reusable UI components.

An Action is the basic unit of work in WebWork. It is a simple command object that implements the Action Interface, which has only one method: execute(). Action implementers can extend the ActionSupport class, which provides i18n localization of messages (with one ResourceBundle per Action class and searching up the inheritance tree) and error message handling including class level and field level messages.
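
A small sketch of such an action, assuming the result constants and the error-message support of ActionSupport as described above (package imports omitted):

    // The framework sets 'name' from a request parameter; the view reads
    // 'greeting' after execute() returns.
    public class HelloAction extends ActionSupport {
        private String name;
        private String greeting;

        public void setName(String name) { this.name = name; }
        public String getGreeting() { return greeting; }

        public String execute() throws Exception {
            if (name == null || name.length() == 0) {
                addFieldError("name", "Name is required"); // field-level error message
                return INPUT;                              // re-render the form
            }
            greeting = "Hello, " + name;
            return SUCCESS;                                // mapped to a view in the configuration
        }
    }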

Actions can be developed in one of two styles: model driven or field driven. Model driven Actions expose a model class via a get method, and the form fields refer directly to the model properties using expressions like "pet.name". Xwork uses OGNL (the Object Graph Navigation Language) as its expression language, and when rendering the page, this expression will translate to getPet().getName(). When setting properties, this will translate to getPet().setName(). This style of development allows for a great deal of model reuse and can allow you to directly edit your domain objects in your web pages, rather than needing a translation layer to form beans. Field driven Actions have their own properties which are used in the view. The action's execute() method collates the properties and interacts with the model. This can be very useful when your form and model are not parallel. Even in this case, the powerful expression language in WebWork can allow you to compose your form fields into aggregate beans, such as an address bean, which you can reuse to simplify your action classes.
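
For instance, a model-driven action might look like the following sketch (class names invented), with the form field "pet.name" resolved against getPet():

    public class EditPetAction extends ActionSupport {
        private Pet pet = new Pet();          // domain object edited directly by the form

        public Pet getPet() { return pet; }   // "pet.name" -> getPet().getName()/setName()

        public String execute() throws Exception {
            // Request parameters have already been pushed into 'pet' via OGNL,
            // so execute() only needs to hand the populated object to the model.
            // ... save pet ...
            return SUCCESS;
        }
    }

    // A plain domain class, reused unchanged from the model layer.
    class Pet {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }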

WebWork2 allows you to build your own reusable UI components by simply defining a Velocity template. This is how the pre-built components of WebWork2 are built for common components such as text fields, buttons, forms, etc. and made available from any view type (either JSP or Velocity at the moment). These components are skinnable by defining multiple templates for the same component in different paths. If your components include the default header and footer templates that are used in the pre-built templates, then they will inherit the ability to automatically handle displaying error messages beside the problem form field. These custom UI components are especially handy for reusing templates which handle your custom model types or for things like date pickers, which Mike showed as an example.

Interceptors in Xwork allow common code to be applied around (before and/or after) action execution. This is what Mike calls "Practical AOP". Interceptors help to decouple and componentize your code. Interceptors can be organized into stacks, which are lists of interceptors to be applied in sequence, and can be applied to actions or whole packages. Much of the core functionality of Xwork and WebWork2 is implemented as Interceptors. The common basic examples of Interceptors are timing and logging, and these are built in with Xwork. Mike went through an example of an interceptor that notifies users of events via email. This interceptor has its own external configuration file which specifies which users are interested in which events, and it compares this configuration with the action invocations passing through it to determine if any messages should be sent.
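
A sketch of a timing interceptor in this style; the Interceptor and ActionInvocation signatures are approximated from the description above rather than copied from the framework:

    public class TimerInterceptor implements Interceptor {
        public String intercept(ActionInvocation invocation) throws Exception {
            long start = System.currentTimeMillis();
            try {
                return invocation.invoke();   // run the rest of the stack and the action
            } finally {
                long elapsed = System.currentTimeMillis() - start;
                System.out.println(invocation.getAction().getClass().getName()
                        + " executed in " + elapsed + "ms");
            }
        }

        public void init() {}
        public void destroy() {}
    }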

Xwork's validation framework allows for decoupled validation of action properties. It is implemented as an Interceptor and reads external XML files which define the validations to be applied to the Action. Error messages are loaded from the Action's localized messages and flow through to the UI. Validator classes can be plugged in to add to the set of bundled validators. The bundled validators include required field and required String validators, range validators for Dates and numbers, and email and URL validators. Xwork also includes expression validators at both the Action and field level which allow you to use any Ognl expression as the validation.

Inversion of Control (IoC) removes the burden of managing components from your code and puts it on the container. The container takes care of managing component lifecycle and dependencies. EJB is an example of IoC, but with limited services. IoC promotes simplicity and decoupling of your components and encourages your classes to be smaller and more focused. Unit testing is also simplified, as you can just supply MockObject instances of the services your code depends upon during testing. Xwork and WebWork2 provide a web-native IoC container which manages component dependencies. In WebWork2 IoC is implemented as lifecycle managers (SessionLifecycleListener, etc) and an Interceptor. There are 4 component scopes in WebWork2 IoC: Application, HTTP Session, HTTP Request, and Action invocation. IoC in Xwork / WebWork2 is purely optional, so you can use it if you want it.
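
A sketch of why that helps testing (all names illustrative): the action depends only on an interface that the container normally supplies, so a unit test can hand it a simple mock instead.

    // The component contract the container normally provides.
    public interface MailService {
        void send(String to, String body);
    }

    class NotifyAction {
        private MailService mailService;

        // The IoC container (or a test) injects the dependency.
        public void setMailService(MailService mailService) { this.mailService = mailService; }

        public String execute() {
            mailService.send("admin@example.com", "new signup");
            return "success";
        }
    }

    // In a unit test, no container is needed: record the call and assert on it.
    class RecordingMailService implements MailService {
        String lastBody;
        public void send(String to, String body) { lastBody = body; }
    }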

Xwork / WebWork2 allows for sets of Actions and their views to be bundled as a jar file and reused. Your main xwork.xml file can include the xml configuration file of the jar file because they are included from the classpath. Similarly, if your views are Velocity templates, you can bundle your views in the jar file and they will be loaded from the classpath when rendering. This allows for componentization of your application and reuse of bundled Actions across applications.

Mike finished up with a comparison of WebWork2 vs. Struts. Struts is obviously the 500 lb gorilla in the web MVC space, so why use WebWork? WebWork's pros include being a smaller, simpler framework, not having to build ActionForm beans, making it very simple to test your Actions, having multiple well-supported view technologies, simpler views with fewer JSP tags and a more powerful expression language, not having to make your Actions thread-safe, not having your Actions tied to the web, and not being part of Jakarta :-). WebWork2 also adds many new features such as Interceptors, packages, IoC, etc. WebWork's cons include being a smaller project with fewer books and less tool support, having less standards support for specs like JSTL and JSF, and not being part of Jakarta :-).


Patterns Frameworks & Micro-Architectures

John Crupi's talk on patterns started with general definitions of patterns and why they matter. John then discussed the 3 business tier patterns in the 2nd edition of Core J2EE Patterns. The Business Object pattern reintroduces the notion of a domain model to J2EE developers, who, John agreed, are still not developing enough with object-oriented constructs.

Domain store is another interesting pattern, which answers the problem of how to separate persistence from your object model. The pattern identifies some of the core abstractions that need to be in place in order to implement a persistence framework, and the book goes into two options - a custom implementation and a JDO implementation.

John introduced the notion of pattern frameworks and micro-architectures. A pattern framework is a set of patterns that when used together can provide a nice set of skeleton code that can be readily applied on new projects, or in response to particular usecases. Micro-Architectures (called Reference Architectures in Floyd's EJB Design Patterns book) are sets of complete architectures that can result from the most common combination of patterns. They represent a higher level of abstraction than the individual patterns and can be applied to solve certain usecases / problems.

From his new book, John identified two micro-architectures: one combines business delegate, service locator, session facade, transfer objects (their new name for data transfer objects / value objects), transfer object factories, business object, and domain store; the other is a micro-architecture for doing web services workflow. Another potential micro-architecture, which Floyd discussed with John after the event, is the use of the command pattern to organize business logic, similar to the writeup of the EJB Command pattern in Floyd's book. An architecture based on the command pattern replaces the business delegate, session facades, and data transfer objects.


Java Keynote: Where We are and Where We're Going as an Industry and Community

Rick Ross, the founder of Javalobby, says that he doesn't see a current danger facing Java and that it's a pretty good time for us. .Net is less of a danger than it seemed even a year ago because Microsoft has a lot of internal confusion on how to market and use .Net.

Sun is planning on investing $500 million toward a Java marketing effort. That's not a lot compared to some of its competitors, but it's a lot more than has been spent in the past. The java.com site is an effort to make Java visible and comprehensible to the public. It's also good to see Sun making an effort that has been lacking for some time with java.net. It's unfortunate, however, that they didn't consult with the people who have been in the portal business for some time. The announcement of the deal with HP and Dell to bundle the current JVM with their machines is a major deal to get Java out to the public.

Floyd solicits TSS feedback
just before Rick Ross' keynote

There was recently a ruling from the appeals court in the private suit Sun brought against Microsoft for its anti-competitive behavior against Java. The previous ruling had forced Microsoft to bundle the current JVM with Windows. On Thursday the appeals court vacated the ruling to carry Java but upheld the copyright infringement finding, which keeps Microsoft from calling the MS JVM "Java"; OEMs bundling the MS JVM and calling it "Java" would also be copyright violators. This has been a major influence in the deals with OEMs to bundle a current JVM.

Rick asks, what is the Java vision? Java's vision is to maximize code portability and minimize porting costs. Java is also about agreeing on standards and competing on implementations. Finally, the Java community is interested in collaborating as a community through such efforts as the JCP.

The current economic conditions are difficult for the technology industry. Nice-to-have solutions are not selling. Money is tight and sales are hard to come by. Technology salaries are down, but not as badly for Java developers as in other technologies. Companies are still loath to let their Java developers go.

So where are the key markets? The enterprise market, the embedded market, the wireless market, and the desktop market.

In the Enterprise market, Java's ability to work with databases has made it very valuable and made it an important competitor to Visual Basic. Java can help extend the life of legacy systems. Java provides value for companies with heterogeneous computing environments due to the strength of Java as an integration tool. The Enterprise market is also a global market, and the current successes can be brought to other emerging economies as they come to need the same solutions.

The audience attentively listens and
takes notes during Ross' evangelical keynote

Rick thinks we developers have believed the FUD about desktop apps, which may have been well earned early on, but is no longer as valid. The FUD doesn't make sense, because Java development tools dominate the market. Tools like IntelliJ, JBuilder, Together/J, etc. prove that Java client apps can be excellent, and apps like Piccolo for panning and zooming show the power of Java for innovative client applications.

In the wireless market, ringtones are a billion dollar market, and Java is deployed in millions of handsets. This is big business everywhere in the world, much more so than in the USA. Smart data and apps and the infrastructure to deliver them are going to be huge business opportunities. Partnering with carriers is crucial but difficult; if you can get in, it can be a huge opportunity. A relationship between J2EE and these mobile markets is inevitable.

There are over 10 times as many processors sold into the embedded space as in computers. We may not know that Java is being put into these devices, but it will be there, and this will create opportunities.

There are lots of opportunities for fun in Java, like open source development and game development. Another opportunity is in volunteering and teaching to give back to the community.

What can the Java community do better? First, take more responsibility for the marketing and promotion of Java. It's not Sun's job to do all of the marketing of Java, and their marketing track record is not all that great. Enlightened self interest should lead us to form an organization to make the public more aware of Java. We should work together to form an answer to the question "What is Java?" The Java community needs to be more cordial and professional in our public forums and disagreements, because it looks bad to outsiders looking at our community. We need to make ourselves easier to appreciate. We need to be better at showing the quantifiable benefits we bring to our organizations. We need to not get too technical too fast. It makes people feel like we're talking down to them and puts them off.

Attendees go for seconds at the TSS barbeque
just before the TSS feedback forum

Rick believes that there's no major challenge now. We face the problems of a maturing industry and need to make sure that we don't get complacent. We must learn to highlight the successes we create.


The Future of J2EE Panel

After the TSS evening barbeque, the audience had settled back into their seats to listen to 'The Future of J2EE' panel with Cedric Beust, Jim Knutson, John Crupi, Mike Burba, Floyd Marinescu, Rick Ross, and Rod Johnson.

Each speaker got a chance to introduce themselves and briefly comment on what challenges the J2EE community faces moving forward.

Cedric and Rod both talked about how J2EE is currently too complex and that we need to simplify it using AOP.

Randy Heffner, Giga VP, mentioned how we need more leadership roles in the community, similar to how Microsoft has Chief Architect roles. There was talk of how this might assuage some of the political and procedural hindrances in the JCP. The panel briefly discussed the effectiveness of the 'democratic process' by which the JCP runs and there was agreement that it's not perfect in its present form.

The Future of J2EE Panel:
(from left to right) Mike Burba, John Crupi,
Jim Knutson, Randy Heffner,
Rick Ross, Rod Johnson, Cedric Beust

Jim Knutson commented on the alignment of various JSRs which are in the works today. J2SE, J2EE, and J2ME need to find more ways to relate to each other and we need to start looking at the family of platforms as a whole. He also thinks that J2EE is going to become more of a commodity.

The audience laughed when John Crupi said in his introduction that "J2EE is perfect and we don't need this panel." Crupi thinks that we need to challenge our tool vendors to reduce complexity which will ultimately put us on a better footing with Microsoft.

"We must keep standardization, but standardization does not work before experience," said Rod Johnson. J2EE has brought many standards but specification by committee doesn't work; there are 1 or 2 parts of J2EE that haven't worked, according to Rod.

Knutson disagreed. He thinks we need standards first. We obviously cannot beat Microsoft in the area of standards because they can get things out the door quicker. Anne Thomas Manes got up from the audience at this point and reminded the panel of the executive committee and that J2SE and J2EE are the summation of a lot of JSRs. The executive committee has 16 vendor representatives from Sun and various other companies. Knutson humorously referred to this committee as being comprised of "Sun and Sun's friends."

This paved the way for a discussion on whether or not Sun's executive committee is the best way to move Java forward and whether there is an adequate 'speed of delivery' in the community process. The OMG was cited as not having demonstrated sufficient speed in defining standards.

Cedric commented that the JCP is a powerful weapon and the reason we are here today. It's obviously not perfect, but it has helped us against Microsoft.

Floyd changed the subject of the discussion to AOP. He asked the panel whether it would 'replace or displace' EJB.

Rod didn't think it would do either. AOP is a generalization and an enhancement of EJB technology. If we can 'do it right', it might just replace EJB, making EJB a legacy technology. Will EJB still be used two years from now? He thinks so.

Crupi said that somebody once asked him when XML came out, whether it would replace Java. The audience laughed. Cedric said that whether we decide to embrace it slowly or quickly, it will definitely be interesting to see the effect AOP has on Java.

Mike Cannon-Brookes asked about tools standardization, and whether disparate tools hurt us against Microsoft.

There was some talk of the religious wars between Vi and Emacs users. Cedric thinks it's important for there to be tools standards. Rick Ross commented on how Microsoft can very easily make all of its applications work with .NET, and that because of its strong customer relations it constantly turns to customers to iteratively improve its products and backs up its marketing with customer testimonials.

Rod said that we cannot compete with Microsoft on tools because J2EE tools need to work with too many disparate application servers.

Crupi elaborates his point as
the panel attentively listens

The tools discussion continued as panelists and audience members imbibed more beer; however, the discussion quickly degenerated into a debate about using JBoss and whether or not its support is dependable.

Floyd brought the panel back into focus by asking them whether we should redesign J2EE due to its complexity.

"Revolution or reform?" Rod asked. He thinks we need reform. We've acheived a lot in J2EE but we should acknowledge certain suboptimal solutions such as entity beans. We need to focus on simplification. He thinks that Java is simple at solving problems but J2EE is not.

Cedric commented on how we've learned a lot of lessons in the past two to three years and that there are certain mistakes in the spec, such as entity beans and stateful session beans. He emphasized the importance of JMS and JCA, and as a tribute to simplicity he went so far as to praise ADO.NET for providing such simple database access.

Mike Burba emphasized the importance of process-based application development: simple things are done more simply in a declarative manner, and metadata is a step towards ease of development.

Crupi talked about how EJB 3.0 will reduce complexity but that people need to be more vocal about the new iteration of the Java Community Process - version 2.6.

The panel ended on a positive note with a general sense of agreement that J2EE is in fact headed in the right direction.


Saturday Night Beer Tent Party

Mike Cannon-Brookes, Cedric Beust, Vincent Massol,
John Crupi and others in heated discussion

After the Futures panel ended around 10:00, the crowd descended into some hard beer drinking and technology discussion. Over 600 pints of Sam Adams were imbibed at the event. TSS attendees were a hardy group, having been attending symposium sessions since 9am that morning - over 13 hours of content. But that didn't stop a group of about 40 from staying up until well past midnight, drinking what was left of the beer and discussing technology. It was definitely a great experience and a unique assembly of people in the industry.

Floyd and Cedric enjoy a beer while listening
in on Crupi and the open source guys

The discussions of the night were varied - at one table you had Patterns author/Sun engineer John Crupi and Giga Research VP Randy Heffner in a heated discussion over the JCP process with open source diehards Gavin King (Hibernate), Mike Cannon-Brookes (OpenSymphony), and others. In another group, Rod Johnson, Juergen Hoeller, Jim Knutson (Web Services JSR 109 spec lead and IBM's father of EJB) and others were debating whether AOP will take over the enterprise world. A bet was made that everyone would meet at TheServerSide Symposium in exactly two years' time to see whether the industry will have replaced EJB with POJOs and a good enterprise AOP framework. Throughout all this, Floyd Marinescu kept himself busy walking around with pitchers of beer, making sure that everyone's plastic cups were full.


Day 3


Introduction to Agile Modeling

According to Scott Ambler, author of Agile Modeling and Elements of UML Style, the current methods are not working: we've got a 65-80% failure rate for IT projects. Our industry suffers from over-specialization, and as developers we need to make sure we're not too focused on development alone - we need to understand the business to be effective. Modeling and documentation don't have to be dysfunctional and don't have to be a burden on the development process. It's important to realize that models are not documents and that models are an important part of any process, including XP.

Scott asks, what is Agile Modeling? Agile models are just good enough: once your models are good enough, move on. One of the biggest problems with modeling is that we get too specific in our models. It's important to realize that everything works on a whiteboard, and the longer we go without feedback from implementing our models, the higher the risk. Agile Modeling is not a full process; it's a part of the process which helps make your modeling activities explicit, both to developers and to management.

Scott Ambler, Vincent Massol and
others at Rick Ross' keynote

Agile models are just good enough to fulfill their purpose. How can you get home if your street is not on your map? That's the same level of absurdity we get when we're told that every detail of our software project has to be spelled out. Agile models can be as simple as CRC cards on 3x5 index cards or post-it notes on a flipchart. These are very simple tools, and you can explain them to the users and bring them into the process. The users are the ones who understand the product we're building, so having a big tool is nice, but are the users going to be able to use it or understand it? We need to make our models inclusive to allow the users to remain involved.

What are the values of Agile Modeling? Communication, simplicity, feedback, courage, and humility. Models should communicate easily with users, and the simpler the models, the easier the concepts are to communicate. The longer we go through the modeling process without getting feedback, the more at risk we are of developing a system the users don't want.

The principles of Agile Modeling center around getting work done. One of the key principles is that software is your primary goal - if you're not there to write code, then what are you doing? Enabling the next effort is your secondary goal, because someone has to maintain and extend the system. Agile Modeling should strive to maximize stakeholder involvement, because it's their system and their money; stakeholders need to be given the choice, and the information necessary to make that choice, about which deliverables - including models and documentation - are developed. We also need to embrace change and realize that we can't get things right the first time. We need multiple models to show different abstractions of the system, and we need to realize that the content of the models is more important than the notation. As soon as we get to a point where we can start writing code, we're done, and the model has served its purpose. We can iterate between the code and the model if we need more detail, and we can iterate to another artifact if we're stuck on a model. We need to choose the simplest tools possible, including the simplest models: use the models that elaborate the issues, whether they're part of UML or not, as long as they communicate the design - UML defines nothing for data models and nothing for UI design. Our models are only valuable as long as they're a source for us to code from. If a model or a process is not adding value, why should we develop it or follow it? We should only go back to update a model when it's causing problems and we're still working from it.

Agile Model Driven Development (AMDD) involves starting with iteration 0 to gather some initial requirements and do some initial architectural modeling. We then need to prioritize our most valuable features and develop those first, repeating the process again and again. Test Driven Development goes hand-in-hand here: develop the test for a feature first, then code until it passes.
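As a rough illustration of that test-first rhythm, here is a minimal sketch in JUnit 3 style; the FeaturePrioritizer class and test names are invented for this example and are not from Scott's talk. The test is written first and fails until just enough implementation exists to make it pass.

import junit.framework.TestCase;

// Minimal implementation, written after the test below and doing just
// enough to make it pass.
class FeaturePrioritizer {
    private String bestName;
    private int bestValue = Integer.MIN_VALUE;

    void add(String name, int businessValue) {
        if (businessValue > bestValue) {
            bestValue = businessValue;
            bestName = name;
        }
    }

    String next() {
        return bestName;
    }
}

// The test comes first: it drives the design and fails until
// FeaturePrioritizer satisfies it.
public class FeaturePrioritizerTest extends TestCase {
    public void testHighestValueFeatureComesFirst() {
        FeaturePrioritizer prioritizer = new FeaturePrioritizer();
        prioritizer.add("reporting", 3);
        prioritizer.add("checkout", 9);
        assertEquals("checkout", prioritizer.next());
    }
}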

I asked Scott about using agile modeling with offshore development. Offshore outsourcers are often resistant to agile processes because their business model is about putting 50 people on a project and cranking it out from a spec. Agile processes can be used with offshore development, but it's more difficult because communication is harder and people end up flying back and forth. You often end up with the people on this side just developing documentation to feed the development team offshore.

Someone asked, what if you can't get out of developing the documentation and going to the meetings? Scott wrote an article for Software Development about blockers: one person on the team attends the meetings and builds the required documentation to keep the rest of the team working.

Your primary goal is to build software. If you don't have active stakeholder support and involvement, you're going to fail and you should cancel the project. Your developers have to be responsible enough to know their own limitations and to work with others who have the skills needed to get things done.

Check out http://www.agilemodeling.com to learn more about Agile Modeling.


Next Generation of Applying J2EE Patterns

John Crupi, Chief Java Architect of the Sun Java Center, gave this talk, in which the predominant theme was that 'Code is King' and that design patterns should guide the code, not necessarily generate it with the click of a button. He struck a chord with developers who religiously code in Vi or Emacs; people don't want to change their IDEs and they shouldn't have to. There needs to be a return to Emacs-style interfaces, except that the editor needs to become smarter using code helpers. Code, which he described as the canvas, drives development, and design should merely help or navigate.

The J2EE patterns are patterns on paper; at the end of the day, they have to be implemented. The pBench tool is a pattern-based framework that can be used to answer questions like "Will this design perform and scale?" You run it in your app server, then send it a schema defining the patterns you want to use together and test the throughput. You define the patterns used in a very high-level XML file which gives some deployment characteristics of the patterns and components. The process of defining the patterns and interactions encourages performance-based design. You can model your scenario and benchmark different designs against it.

Crupi explains the pBench tool
to a capacity crowd

Customers are confused about how to implement patterns and avoid common mistakes. Unfortunately, tools are very primitive in patterns support and many developers still use basic text editors as their development environment. Different developers work best in different tools, so the code must be the repository. The code and the design must be connected. The code drives, the design navigates.

Tools are starting to help developers and are getting smarter. The first generation was just pop-up completion; suggesting changes or actions is the next generation of code helpers. Neither of these is design-centric. The third generation is CADI - Code and Design Infusion.

How can we integrate design and code? The design should help you identify where you are and visualize the patterns you're implementing. It needs to detect patterns in code, transform from one implementation of a pattern to another, visualize the patterns to provide a graphical metaphor, and refactor automatically. The pattern is the working design abstraction. Graphical elements can be used to highlight and represent the detected patterns in the code. What about bad practice detection? What if the IDE could highlight things like calling an entity bean from the presentation layer? Could we provide (Joshua) Bloch in a Box? Not only could the bad practice be highlighted, but a refactoring could be suggested - in this case, an "Introduce Session Facade" refactoring.
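To make that suggested refactoring concrete, here is a rough before-and-after sketch; the class names are hypothetical, and EJB plumbing such as home interfaces, JNDI lookups, and deployment descriptors is elided.

// "Before": the presentation layer calls the entity bean's remote
// interface directly, making one remote call per getter.
interface AccountEntityRemote extends javax.ejb.EJBObject {
    String getOwnerName() throws java.rmi.RemoteException;
    double getBalance() throws java.rmi.RemoteException;
}

class AccountPageBefore {
    String render(AccountEntityRemote account) throws java.rmi.RemoteException {
        return account.getOwnerName() + ": " + account.getBalance();
    }
}

// "After" the Introduce Session Facade refactoring: one coarse-grained
// call to a session bean, which works with the entity bean inside the
// EJB tier and returns a serializable value object.
class AccountView implements java.io.Serializable {
    private final String ownerName;
    private final double balance;
    AccountView(String ownerName, double balance) {
        this.ownerName = ownerName;
        this.balance = balance;
    }
    String getOwnerName() { return ownerName; }
    double getBalance() { return balance; }
}

interface AccountFacadeRemote extends javax.ejb.EJBObject {
    AccountView getAccountView(String accountId) throws java.rmi.RemoteException;
}

class AccountPageAfter {
    String render(AccountFacadeRemote facade, String accountId)
            throws java.rmi.RemoteException {
        AccountView view = facade.getAccountView(accountId);  // single remote call
        return view.getOwnerName() + ": " + view.getBalance();
    }
}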

What if you had an architectural style guide? We're used to defining requirements, but how do we verify that our implementation is adhering to our architectural goals? What if we could provide an architectural design dashboard that shows where we are in our code in terms of the architectural patterns? The tool could analyze the implementation of the patterns and constrain what patterns can be used. Progressive refactoring could show what the refactoring changes would be.

Crupi walks the audience through
a patterns flowchart

The reality of this vision is the Jackpot project at Sun Labs. Its mission is to analyze, transform, and visualize code. James Gosling, Michael Van De Vanter, Tom Ball, and Tim Prinzing are working on it. This is a technology, not a product: it sucks up your whole source tree, builds an Abstract Syntax Tree, and does in-memory detection and transformation against the AST. The problem is comments, since they aren't part of the AST.

So how do we detect a pattern? How do we detect bad practices? The tool needs to be able to analyze the code without depending on annotations, and possibly annotate the code itself. Can we ever be 100% sure of a pattern matching, or is there a minimum level to reach that's good enough?

How can we do the hard refactorings? How can we move a method? There are many issues, including the constructors and properties of the source and target classes. These kinds of issues are being addressed through the Jackpot project.


Aspect Oriented Java Development

Bob Lee, founder of the jAdvise AOP framework and co-author of Bitter EJB, presented this session. Aspect oriented development is a way to modularize cross-cutting aspects of a system; it enhances, rather than replaces, OOP. In addition to the core OO concepts, AOP adds the ideas of interceptors, which can add functionality at a point in your code; introductions or mixins, which can add functionality and/or state to your existing objects; and pointcuts, which allow you to define where interceptors and introductions are applied. Another concept often used with AOP is the idea of metadata attributes, which provide runtime information about your class from outside the class itself and give your runtime system hints on how to treat your classes. The canonical examples of AOP are logging and monitoring of your code without having to instrument the code itself.
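For readers new to the idea, here is a minimal, framework-neutral illustration of interception using plain JDK dynamic proxies (java.lang.reflect.Proxy). The AccountService names are invented for this example, and none of the frameworks discussed in the session work exactly this way; the point is only that logging is added around an object's methods without touching its code.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountService {
    void transfer(String from, String to, long amountInCents);
}

class AccountServiceImpl implements AccountService {
    public void transfer(String from, String to, long amountInCents) {
        // business logic only -- no logging code here
    }
}

// Interceptor: logs around every call made through the proxy.
class LoggingHandler implements InvocationHandler {
    private final Object target;
    LoggingHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("before " + method.getName());
        try {
            return method.invoke(target, args);   // proceed to the real object
        } finally {
            System.out.println("after " + method.getName());
        }
    }
}

public class LoggingInterceptorDemo {
    public static void main(String[] args) {
        AccountService service = (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class[] { AccountService.class },
                new LoggingHandler(new AccountServiceImpl()));
        service.transfer("A-1", "B-2", 5000);
    }
}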

The observer pattern is a good example of how AOP can make our classes simpler. The observer pattern is often implemented by either extending a base Observable class or by delegating to an Observable instance. In general though, your code shouldn't care that it's being observed. By using introductions, we can add the Observable behavior to any class and using interceptors, we can have it call the Observers.

The first AOP Java implementation was AspectJ, which uses a custom language to define your aspects above and beyond Java. In the last eight months, starting with Rickard Oberg, AOP frameworks in Java have begun springing up. In these frameworks, rather than custom language extensions, pointcuts are defined in Java, sometimes configured via XML, and they use either dynamic proxies, which allow calls to an interface to be intercepted, or bytecode modification, which can instrument your class bytecodes with functionality to intercept any method or field access. When bytecode modification is used, the class bytecodes can be modified during the classloading process or during the build. Bytecode modification can be done through a couple of projects, BCEL and Javassist.

Developers from all around smile for a pic

Bob Lee's AOP framework is jAdvise, which uses bytecode modification with Javassist. It can use either runtime classloading or compile-time bytecode modification. It runs the invocation down a chain of interceptors and finally calls the original implementation of the advised method. The Aspect interface has one method, invoke(), which takes an ActionInvocation. The ActionInvocation allows you to get the advised object, the method, and the method arguments, and allows you to invoke the next interceptor (or the original object if you've reached the last interceptor). The Advisor allows you to set the array of interceptors to be applied. When no interceptors are applied, it delegates directly to the original object with no object creation; when interceptors are present, overhead is comparable to dynamic proxies. Bob showed an interesting example of dynamically creating a sequence diagram by adding two interceptors: one to log method calls in a format a GUI tool uses to generate the sequence diagram, and one to slow the method calls down so each call could be watched as it happened.
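To make the shape of that API concrete, here is a sketch of an interceptor written against interfaces approximated from the description above; the real jAdvise types and method signatures may differ, so treat this as an illustration of the chain-of-interceptors idea rather than the actual library API.

import java.lang.reflect.Method;

// Approximation of the invocation abstraction described in the talk.
interface ActionInvocation {
    Object getAdvisedObject();
    Method getMethod();
    Object[] getArguments();
    Object invokeNext() throws Throwable;   // next interceptor, or the original method
}

// Approximation of the single-method Aspect interface.
interface Aspect {
    Object invoke(ActionInvocation invocation) throws Throwable;
}

// Example interceptor: time each advised method call.
class TimingAspect implements Aspect {
    public Object invoke(ActionInvocation invocation) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return invocation.invokeNext();
        } finally {
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(invocation.getMethod().getName() + " took " + elapsed + "ms");
        }
    }
}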

Other frameworks exist as well. JAC is going the route of creating a complete AOP-based application server. CGLIB allows proxying any object, not only interfaces. Nanning, developed by Jon Tirsen, is based on dynamic proxies; it provides interceptors and introductions and is configurable either via XML or programmatically. AspectWerkz by Jonas Boner uses bytecode modification and runtime attributes and looks very cool. JBoss is building a bytecode modification AOP framework into their app server. JFluid is a new project that allows you to instrument code that might need to be loaded before your AOP code is loaded. Rod Johnson began a movement to standardize the AOP frameworks called the AOP Alliance.

Bob is interested in how he can apply this to his existing OO work. How to define pointcuts is still open for debate: AspectJ uses its own language extensions to define them, while Nanning uses programmatic configuration. Bob is using an aspect decorator with interceptors on domain object factories and collection/array classes to instrument any class being created. To configure this, he has an interceptor - applied to the interceptor that instruments the classes - which decides which interceptors to add to a class. For this configuration, he is looking at using BeanShell as a powerful configuration facility.

Bob feels that the only place he sees value in interception is method invocation. Construction interception can be handled if you use factory methods. Field interception has performance problems, especially since plain field access is atomic and fast; there's also reflective field access, and you'd have to instrument all of your classes to support it. Cedric points out that it is powerful to be able to pointcut on exception throwing and class casting. You might want to "soften" checked exceptions into runtime exceptions because, for instance, you've instrumented your POJO to be remote.
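As a rough, framework-free illustration of that last point, the following sketch uses a JDK dynamic proxy to "soften" RemoteException into an unchecked exception. The QuoteService interfaces and names are invented for this example; a real AOP framework would do the same kind of wrapping through its interceptor chain.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.rmi.RemoteException;

// Remote-flavoured service: every method declares the checked RemoteException.
interface RemoteQuoteService {
    double getQuote(String symbol) throws RemoteException;
}

// The "softened" view callers program against: same method, no checked exception.
interface QuoteService {
    double getQuote(String symbol);
}

class SofteningHandler implements InvocationHandler {
    private final Object target;
    SofteningHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Look up the matching method on the target, which may declare RemoteException.
        Method targetMethod =
                target.getClass().getMethod(method.getName(), method.getParameterTypes());
        try {
            return targetMethod.invoke(target, args);
        } catch (InvocationTargetException e) {
            Throwable cause = e.getTargetException();
            if (cause instanceof RemoteException) {
                throw new RuntimeException(cause);   // soften to an unchecked exception
            }
            throw cause;
        }
    }
}

class Softener {
    // Wrap a remote service so callers see the exception-free interface.
    static QuoteService soften(RemoteQuoteService remote) {
        return (QuoteService) Proxy.newProxyInstance(
                QuoteService.class.getClassLoader(),
                new Class[] { QuoteService.class },
                new SofteningHandler(remote));
    }
}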


Conclusion

Overall, TheServerSide Symposium seemed to be a great success. The most common complaint attendees had was that it was difficult to choose which talks to go to, because they were all so good. Another, rarer compliment was that not only were the attendees remarking on how qualified the speakers were, the speakers were commenting on how technical and 'on the edge' the attendees were. This is probably because the conference was TheServerSide Symposium, and many of the J2EE experts from the website were there. The thing most people liked about the show (repeated many times in the blog links below) was the community aspect of the conference. It was relatively small (about 330 people), and people liked that they had a chance to talk to the speakers personally and get to know them. It was a rare combination of J2EE people: TSS had invited key people from the open source world, the commercial world, and the people crafting the specs themselves - resulting in many new relationships and interesting debates.


Blogs @ TheServerSide Symposium

Day by day coverage by Cameron Purdy
http://www.freeroller.net/page/cpurdy/20030629

"TheServerSide Symposium - a great, focused show", by LinuxWorld Chief Editor Kevin Bedell
http://www.oreillynet.com/cs/weblog/view/wlg/3418

TheServerSide Symposium, my report - BEA's Cedric Beust
http://jroller.com/page/cbeust/20030630

OpenSymphony's Mike Cannon-Brookes: TheServerSide Symposium wrap up and downloads
http://blogs.atlassian.com/rebelutionary/archives/000200.html

Bob Lee's Thoughts on the Symposium
http://crazybob.org/roller/page/crazybob/20030707

Erik Hatcher "TheServerSide Symposium debriefing"
http://weblogs.java.net/pub/wlg/219

Dion Almaer "TSS Symposium chugging away"
http://www.oreillynet.com/pub/wlg/3400

Craig Pfeifer's TSS Symposium Photo Album
http://photos.autenroad.com/album09

Rick Hightower on TheServerSide Symposium
http://rickhightower.blogspot.com/2003_06_01_rickhightower_archive.html#105695040425291688

Merrick Schincariol
Day 1: http://jroller.com/page/mschinc/20030628
Day 2 & 3: http://jroller.com/page/mschinc/20030630

Morten Wilken Symposium Writeup
http://www.freeroller.net/page/wilken/20030702#theserversidesymposium_writeup

Rick Ross
http://www.javalobby.org/members/nl/jlnews_20030701.html

Aspect-Oriented Programming on JBoss 4.0, Kevin Bedell
http://www.oreillynet.com/cs/weblog/view/wlg/3419

Meta Data vs. AOP
http://jroller.com/page/mschinc/

Related Resources