Can Java EE Deliver The Asynchronous Web?

In Part 1 of this article we explored the concepts of the Asynchronous Web and how it can revolutionize the way we interact with web applications, and ultimately each other.  We gained an understanding of the low-level long polling mechanism, and explored some of the intricacies of managing long-lived HTTP connections associated with it.  We also determined that the Java EE specification process is lagging in both Servlet-based Asynchronous Request Processing (ARP), and at the presentation layer programming model.  We posed a number of questions that need answers, and we will now examine an approach to delivering Asynchronous Web capabilities through extensions to existing Java EE technologies.  While the following discussion is conceptual in nature, concrete implementations of these concepts have been achieved in the ICEfaces open source project.

Let's start with the programming model

Regardless of the intricacies of the underlying mechanism that supports asynchronous push, the programming model the developer is exposed to should be intuitive and natural to use, or it won't be used at all.  JavaServer Faces Technology (JSF) is the most natural place to start, as it provides a standards-based request processing lifecycle as a foundation for the programming model.  This lifecycle begins with a request, from which values are applied and validated, the model is updated, the application is invoked, and finally the response is rendered.  The lifecycle is client-initiated, but for push we need to be able to trigger an update through some server-initiated mechanism, as illustrated below.

Server-initiated Rendering

From the developer's perspective, you simply want trigger points in your application that request a render whenever new presentation must be pushed, and you want to rely on the underlying mechanism to handle the details.  We will now examine some of those details.
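To make the idea concrete, here is a minimal sketch of such a trigger-point API. The `GroupRenderer` class and its `View` interface are hypothetical, loosely modeled on the group-rendering style described in this article, not the actual ICEfaces API; in real JSF, `render()` would drive the RenderResponse phase for the registered view.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;

// Hypothetical sketch: application code registers views under a group name,
// and any server-side trigger point can request a render of the whole group
// without a client request being present.
public class GroupRenderer {
    public interface View {
        void render(); // in JSF terms: run the RenderResponse phase for this view
    }

    private static final Map<String, Set<View>> groups = new ConcurrentHashMap<>();

    public static void addView(String group, View view) {
        groups.computeIfAbsent(group, g -> new CopyOnWriteArraySet<>()).add(view);
    }

    // Trigger point: push new presentation to every view in the group.
    public static void render(String group) {
        for (View v : groups.getOrDefault(group, Set.of())) {
            v.render();
        }
    }
}
```

A trigger point in the application then reduces to a single call such as `GroupRenderer.render("auction")`, with the framework responsible for everything downstream.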

Incremental updates required

The first thing that must be avoided when pushing updates is a full page refresh with every server-initiated render request, as this would be completely disruptive to the user experience.  While the JSF 2.0 specification includes incremental rendering capabilities, stock JSF 1.2 provides no such mechanism, so an extension is required.  Ajax techniques have been used successfully to provide incremental updates, and can be combined with server-initiated push rendering to achieve the desired capabilities.  In some cases additional markup in the page indicates which elements need to be updated under various conditions, but this can dramatically increase the burden on the developer to achieve efficient and effective incremental page updates.  ICEfaces provides an incremental update mechanism based on a technique called Direct-to-DOM rendering, where the framework determines precisely the set of incremental changes required for an update.  The major advantage of this approach is that no developer intervention is required to achieve proper incremental rendering of push updates.  The basic push mechanism with incremental updates is illustrated below.
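The essence of the Direct-to-DOM idea can be illustrated with a toy diff. This is not the ICEfaces implementation; it is a minimal sketch in which a server-side DOM is modeled as a map from element id to rendered markup (both ids and markup here are made-up examples), and only the fragments that changed since the last render are selected for transmission.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: render into a server-side DOM, diff it against the
// previously sent copy, and ship only the changed fragments to the browser.
public class DomDiff {
    // Returns the ids whose markup changed between the old and new server-side DOM.
    public static List<String> changedNodes(Map<String, String> oldDom,
                                            Map<String, String> newDom) {
        List<String> updates = new ArrayList<>();
        for (Map.Entry<String, String> e : newDom.entrySet()) {
            if (!e.getValue().equals(oldDom.get(e.getKey()))) {
                updates.add(e.getKey()); // only this fragment goes over the wire
            }
        }
        return updates;
    }
}
```

A real framework diffs an actual DOM tree rather than a flat map, but the payoff is the same: the developer writes an ordinary page, and the update sent to the browser contains only what changed.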

Push with Incremental Update

Need a request context

While conceptually straightforward, forcing the RenderResponse phase to run outside a normal request-initiated lifecycle poses a challenge.  Specifically, the JSF lifecycle maintains a FacesContext object containing the state associated with the current request, so it is necessary to create a synthetic request context within which the JSF lifecycle can execute.  This synthetic context can be created programmatically, as it is in ICEfaces 1.x, but constructing the necessary state is quite involved.  Integration with other middleware like Seam or Spring Web Flow introduces further complications, as those technologies expect specific state in the request context that must also be synthesized.  An alternate approach is to have the push mechanism alert the client over the blocking connection, and then have the client make a request to fetch the updates.  This is somewhat less efficient, as it requires an additional request, but it allows the JSF lifecycle to build the request context naturally.  This mechanism, illustrated below, is implemented in ICEfaces 2.0.
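The server half of that notify-then-fetch variant can be sketched as a simple queue of notifications. The `UpdateNotifier` class and its method names are hypothetical; the point is that the blocking connection carries only a lightweight "this view changed" notice, while the subsequent ordinary client request lets JSF build the FacesContext for the render naturally.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the notify-then-fetch approach: triggers queue a small notice,
// the long-poll request delivers it, and the client follows up with a normal
// request so no synthetic request context is ever needed.
public class UpdateNotifier {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Server-side trigger: queue a notification for the blocking connection.
    public void viewChanged(String viewId) {
        pending.offer(viewId);
    }

    // Held by the blocking connection; completes when an update is available.
    public String awaitNotification() throws InterruptedException {
        return pending.take();
    }
}
```

The extra round trip is the cost; never having to fabricate FacesContext state, and remaining compatible with middleware like Seam that inspects the request, is the benefit.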


Avoid Rendering Mayhem

One can imagine a complex system with multiple trigger points for push generating render requests across large numbers of clients.  The JSF RenderResponse phase is computationally the most expensive part of the lifecycle, so scalability of the implementation will be compromised when excessive rendering is performed.  It is necessary to strictly manage the rendering process, coalescing render requests and maximizing throughput.  Session group management is also an important aspect of managing the push rendering mechanism.  Typically, groups of clients that share state will be impacted by the same push triggers.  Being able to organize these clients into groups, and have trigger points generate render requests across groups can greatly simplify the developer's task of implementing trigger points. 
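Coalescing can be sketched with a set-backed pending queue: duplicate render requests for the same view collapse into one entry, so a burst of triggers costs a single (expensive) RenderResponse pass. The class and method names below are hypothetical illustrations, not a framework API.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of render coalescing: repeated render requests for the same view
// are collapsed into one pending entry before the render worker runs.
public class RenderCoalescer {
    private final Set<String> pending = ConcurrentHashMap.newKeySet();

    // Returns true only when this request actually enqueued new work;
    // duplicate requests for an already-pending view coalesce here.
    public boolean requestRender(String viewId) {
        return pending.add(viewId);
    }

    // Called by the render worker: render each pending view exactly once.
    public int drainAndRender() {
        int rendered = 0;
        for (String viewId : pending) {
            pending.remove(viewId);
            rendered++; // a real implementation would run RenderResponse here
        }
        return rendered;
    }
}
```

A production implementation would also bound the worker pool to cap concurrent renders; the coalescing set is what keeps a storm of triggers from multiplying into a storm of renders.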

Browser connection sharing

As discussed in Part 1 of this article, browsers limit the number of simultaneous connections to a given host, so maintaining separate blocking connections for multiple views onto the same web application, or for multiple applications deployed in the same DNS domain, is not viable; a single connection must be shared across the views.  In order to share the blocking connection, some global state must be available to each view (browser window or tab) in the same domain.  Cookies provide a reliable cross-browser mechanism for state sharing, and are leveraged in the solution described here.  Various strategies could be considered, but one of the simplest is to assign a master view that handles the blocking connection on behalf of all views.  The first view instance sets a cookie indicating that it is the master, and all incoming push update responses are handled by this view.  A second cookie can then carry view-specific updates, which the slave views poll for and apply as they become available.  The master/slave logic must support handing off control when the current master view is disposed of.  This too can be achieved with a cookie that is polled by all slaves to determine when a new master is required: when the master view is destroyed, the cookie is set to indicate so, and another view takes over as master.

Server connection sharing

We will now turn our attention to the server side of the blocking connection.  Within a single domain you could have multiple applications deployed, or multiple portlets spanning multiple web applications, and you will need some form of central management of the shared blocking connection.  A Servlet can be used to handle the blocking connection and communicate with the various applications and portlets, receiving push updates from them and passing those updates back to the browser over the blocking connection.  In ICEfaces, the Push Server does precisely this.
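The routing role of such a Push Server can be sketched as a hub: every application or portlet in the domain publishes updates to it, and the single blocking connection held for the browser drains them in order. The `PushHub` class and its wire format (`"app:updateId"` strings) are invented for illustration; the real Push Server lives behind a Servlet, which is omitted here.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of a central Push Server hub: several applications in the same
// domain publish updates to one hub, which relays them over the single
// blocking connection held for the browser.
public class PushHub {
    private final BlockingQueue<String> outbound = new LinkedBlockingQueue<>();

    // Any deployed application or portlet can publish through the shared hub.
    public void publish(String appName, String updateId) {
        outbound.offer(appName + ":" + updateId);
    }

    // The single blocking connection drains updates from all applications.
    public String nextUpdate() throws InterruptedException {
        return outbound.take();
    }
}
```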

Add a little ARP

Our architecture now includes a central Push Server to manage the blocking connection, but as we discussed in Part 1 of this article, the standard Servlet implementation will exhibit thread-level scalability issues, as each session requires a thread to manage the blocking connection.  The Servlet 3.0 specification includes provisions for ARP, so a ubiquitous solution is on the horizon, but in the here and now we have to deal with proprietary solutions across the spectrum of application servers.  This means that you need an environment-specific implementation of the Push Server, or a mechanism that automatically detects native ARP support and adapts accordingly.  The following open source application servers provide ARP APIs:

  • Apache Tomcat 6 Comet Processor
  • Sun Glassfish Grizzly Plugin
  • Jetty 6 Continuations
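One way to adapt automatically is to probe the classpath for each container's ARP API at startup and fall back to thread-per-connection blocking I/O when none is found. The sketch below uses class names I believe match the Tomcat 6, Grizzly, and Jetty 6 APIs of that era, but they should be verified against your server version before relying on them.

```java
// Sketch of runtime ARP detection: probe for container-specific classes and
// fall back to thread-blocking I/O when none are present.
public class ArpDetector {
    private static final String[] ARP_CLASSES = {
        "org.apache.catalina.CometProcessor",   // Tomcat 6 Comet
        "com.sun.grizzly.comet.CometEngine",    // GlassFish Grizzly
        "org.mortbay.util.ajax.Continuation"    // Jetty 6 Continuations
    };

    // Returns the first available ARP API class name, or null to use the
    // thread-per-connection fallback.
    public static String detect() {
        for (String className : ARP_CLASSES) {
            try {
                Class.forName(className);
                return className;
            } catch (ClassNotFoundException absent) {
                // keep probing the remaining containers
            }
        }
        return null;
    }
}
```

This keeps a single Push Server artifact deployable everywhere, with the container-specific code paths selected at runtime rather than at build time.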

Bring it all together

Now bring all of these details together and we have a robust architecture that handles all of the intricacies associated with push, as illustrated below. Couple this with a straightforward programming model based on JSF, and you are well positioned to deliver on the promises of the Asynchronous Web today.

Push Server

Back to the original question

Can Java EE deliver the Asynchronous Web?  Throughout the preceding discussion we have managed to answer the question affirmatively, but it is clear that the Java EE standards are lagging in a number of areas.  We have also seen that a robust solution tends to be rather involved, so look to industry-proven approaches that allow you to focus on application development, not low-level push infrastructure development.  After all, it is the applications themselves that will revolutionize the web.  Also, look to the evolving Java EE standards activities where pressure is being applied to incorporate asynchronous push capabilities.  As push becomes table stakes in modern web applications, the standards will catch up - just maybe not fast enough for most of us.
