Legacy web applications are synchronous in nature. The user interacts with the web interface presented in the browser, the browser makes requests back to the server based on that user interaction, and the server responds to those requests with new presentation for the user - fundamentally a synchronous process. This means that the presentation delivered to the user represents a snapshot in time of what is a dynamic system. That snapshot becomes stale between user interactions and does not necessarily provide an accurate view onto the current state of the system. Even when you bring Ajax techniques into the equation, this synchronous process is unchanged. While the use of XMLHttpRequest and Ajax techniques facilitates a more fine-grained interaction model than a full page refresh, the requests are still generated by user interaction, so the process remains synchronous, and the potential for a stale view onto the system persists.
The Asynchronous Web is fundamentally different, and that difference revolutionizes how web applications behave. In the Asynchronous Web it is possible to deliver spontaneous presentation changes to the user as the state of a dynamic system changes, without the need for the user to interact with the interface. The advantages are obvious as we can now maintain an accurate view onto the system for the user. Examples are numerous, and include any system providing a view onto a dynamic system, such as a stock portfolio, an inventory, or a day timer/calendar. When you have multiple users interacting with the same system, the interactions of one user can spontaneously impact what other users see, thus creating a truly collaborative system - the essence of what Web 2.0 promises. Again, examples are numerous, including a simple chat client, and an eBay bidding system. Ultimately, most systems that humans interact with are collaborative in nature, so the web interface onto those systems should be too.
How does the Asynchronous Web work?
To achieve the Asynchronous Web we need to be able to send responses back to the browser spontaneously, but how can this be achieved within the confines of the HTTP protocol? We cannot send a response to a non-existent request, so it is necessary to manipulate the request/response mechanism to achieve the desired effect. The most straightforward way is a basic polling mechanism: send requests on a regular basis, giving the system continuous opportunities to update the presentation. This technique, which is illustrated below, is not ideal because there is no ideal polling interval; there is a necessary trade-off between timely updates and the chattiness of the system. As illustrated, multiple events may occur between polls, but it is equally possible for no events to occur. In the final analysis, polling is not a truly asynchronous mechanism.
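The polling trade-off can be sketched with a small simulation. This is purely illustrative (the class and method names are hypothetical, and a queue stands in for the server): events accumulate between polls, so one poll may drain several events at once while the next returns nothing but still costs a round trip.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Illustrative polling simulation: the client asks the server for updates
// at a fixed interval, regardless of whether anything has happened.
public class PollingDemo {
    // Server-side event queue; events accumulate between polls.
    private final Queue<String> events = new ArrayDeque<>();

    void publish(String event) { events.add(event); }

    // Each poll drains whatever has accumulated - possibly nothing,
    // possibly several events at once.
    List<String> poll() {
        List<String> batch = new ArrayList<>();
        String e;
        while ((e = events.poll()) != null) batch.add(e);
        return batch;
    }

    public static void main(String[] args) {
        PollingDemo server = new PollingDemo();
        server.publish("price=101");
        server.publish("price=102");
        System.out.println(server.poll()); // two events delivered late, in one batch
        System.out.println(server.poll()); // empty poll: a wasted round trip
    }
}
```

Shrinking the interval reduces latency but multiplies the wasted round trips; growing it does the reverse, which is exactly the trade-off described above.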
The next option to consider is HTTP streaming, where multiple responses can be sent to a single request, as illustrated below. This is an efficient mechanism, but unfortunately it is not acceptable across all proxy/firewall configurations, making it unsuitable for general-purpose deployments.
The last option to consider is HTTP long polling, where the request is made in anticipation of a future response, but that response is blocked until some event occurs that triggers its fulfillment. This mechanism, which is illustrated below, is nearly as efficient as streaming and is completely compatible with proxy/firewall configurations, as it is indistinguishable from a slow-responding server.
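The blocking behavior at the heart of long polling can be sketched with a `BlockingQueue` standing in for the held request (the names here are hypothetical, not a real container API): the request parks until an application event fulfils it, or until a timeout lets the client detect a dropped connection and reconnect.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Long-polling sketch: the "request" blocks until an event arrives or a
// timeout elapses, after which the client immediately re-issues it.
public class LongPollDemo {
    private final BlockingQueue<String> pending = new LinkedBlockingQueue<>();

    // Server side: an application state change fulfils the waiting request.
    void publish(String event) { pending.add(event); }

    // Request handling: block until an event occurs, bounded by a timeout
    // so dropped connections can be detected and re-established.
    String awaitResponse(long timeoutMillis) throws InterruptedException {
        return pending.poll(timeoutMillis, TimeUnit.MILLISECONDS); // null on timeout
    }

    public static void main(String[] args) throws Exception {
        LongPollDemo server = new LongPollDemo();
        // Simulate a state change arriving while the request is blocked.
        new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            server.publish("stock ACME moved");
        }).start();
        String response = server.awaitResponse(5000); // blocks ~100 ms, then returns
        System.out.println(response);
    }
}
```

Unlike the polling simulation, no request ever returns empty-handed: the response is generated at the moment the event occurs, which is what makes the mechanism effectively asynchronous.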
So long polling provides a viable mechanism for supporting the Asynchronous Web, and is in fact the mechanism used in industry implementations like Ajax Push and Comet. While the mechanism is relatively simple, the ramifications of holding these blocking requests indefinitely are not. We will now examine these in more detail beginning with the Servlet.
Normal request processing in the Servlet requires a thread per request, so if we are going to block these threads indefinitely, and we need one request per client, thread exhaustion in the application server can occur very quickly. The Java EE specification is poised to address this problem in the Servlet 3.0 specification (JSR 315), with the introduction of asynchronous request processing (ARP) that will be well-suited to the long polling mechanism. In the meantime, industry solutions have outpaced the standards process, with a variety of ARP solutions including:
- Tomcat 6 Comet Processor
- Glassfish Grizzly Connector
- Jetty Continuations
- WebLogic Future Response Servlet
- WebSphere Asynchronous Request Dispatcher
All of these mechanisms are well-suited to long polling, but they are all different, which means that any solution will have to be customized to the target deployment environment. For application servers not on this list, we would have to work around the Servlet API entirely to implement scalable long polling. So from the Servlet perspective, supporting the Asynchronous Web is doable, but until the standards catch up, a ubiquitous solution cannot exist.
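The thread-decoupling idea common to all of these ARP mechanisms can be sketched in plain Java (this is not any container's actual API, and the names are invented for illustration): the request thread registers a pending response and returns immediately rather than blocking, and a later application event completes the response.

```java
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

// Sketch of the idea behind asynchronous request processing (ARP):
// instead of parking one thread per blocked request, each pending request
// is represented by a completable handle that an event fulfils later.
public class ArpSketch {
    private final Queue<CompletableFuture<String>> parked = new ConcurrentLinkedQueue<>();

    // Called on the request thread: register the pending response and
    // return immediately, freeing the thread to serve other requests.
    CompletableFuture<String> suspend() {
        CompletableFuture<String> response = new CompletableFuture<>();
        parked.add(response);
        return response;
    }

    // Called when application state changes: fulfil every parked request.
    void publish(String event) {
        CompletableFuture<String> f;
        while ((f = parked.poll()) != null) f.complete(event);
    }
}
```

With this inversion, ten thousand blocked clients cost ten thousand small objects rather than ten thousand threads, which is precisely the scalability problem the container-specific mechanisms above (and Servlet 3.0 ARP) address.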
What client-side processing is required?
Long polling needs to be driven from the client, making some amount of client-side processing a requirement for supporting the Asynchronous Web. At a minimum, we need a mechanism to manage the blocking requests associated with the long polling, and a mechanism to update the presentation when responses occur, as illustrated below.
The process goes as follows:
- The initial blocking request is sent using XMLHttpRequest to initiate the long polling sequence.
- Some state change in the application generates a response containing presentation updates.
- The generated response is delivered to the client.
- Loop back to the initial state, where another blocking request is generated.
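The steps above form a loop that can be sketched as follows, with a `BlockingQueue` standing in for the network connection (all names here are illustrative): the client issues a blocking request, applies the presentation update carried by the response, and immediately reconnects.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Client-side long-polling loop sketch: block, apply the update, reconnect.
public class ClientLoop {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> connection = new LinkedBlockingQueue<>();
        // Server side: two spontaneous updates arrive over time.
        new Thread(() -> {
            try {
                connection.put("update-1");
                connection.put("update-2");
            } catch (InterruptedException ignored) {}
        }).start();

        List<String> presentation = new ArrayList<>();
        while (presentation.size() < 2) {
            String response = connection.take(); // blocking request awaits a response
            presentation.add(response);          // apply the presentation update
            // looping re-issues the blocking request
        }
        System.out.println(presentation); // [update-1, update-2]
    }
}
```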
While the process looks straightforward, there are a couple of intricacies that must be dealt with. To begin with, the long polling mechanism may not be robust under all network conditions, and it is possible that the blocking connection will be dropped. In order to make the mechanism robust, some sort of keep-alive strategy is required. A standard heartbeat mechanism will suffice.
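One simple form of that keep-alive is a watchdog, sketched below with hypothetical names: the server emits periodic heartbeat messages over the blocking connection, every received message refreshes a timestamp, and if the client sees no traffic for longer than a threshold it assumes the connection was dropped and re-issues the blocking request.

```java
import java.util.concurrent.atomic.AtomicLong;

// Heartbeat watchdog sketch: decide whether the blocking connection
// should be considered dropped and re-established.
public class HeartbeatWatchdog {
    private final AtomicLong lastSeen = new AtomicLong(System.currentTimeMillis());
    private final long thresholdMillis;

    HeartbeatWatchdog(long thresholdMillis) { this.thresholdMillis = thresholdMillis; }

    // Call on every response or heartbeat received over the connection.
    void touch() { lastSeen.set(System.currentTimeMillis()); }

    // True when no traffic has been seen within the threshold,
    // signalling that the blocking request should be re-issued.
    boolean connectionSuspect() {
        return System.currentTimeMillis() - lastSeen.get() > thresholdMillis;
    }
}
```

The heartbeat interval must be shorter than the most aggressive proxy idle timeout along the path, or intermediaries will sever the connection before the watchdog ever fires.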
A more complex problem occurs with regard to browser connection limits, particularly with IE, which has a limit of 2 connections to the same domain. The long polling mechanism requires one of these connections, and another is required to handle the normal request processing associated with user interaction, so we have just enough, right? Well, we have just enough for the simplest case of a single browser view onto the web application, but what about opening multiple browser tabs onto the same application, or a portal environment where multiple portlets in the page need access to the long polling mechanism? It is not possible for each of these views to initiate its own long polling sequence, as the mechanism will fail once the browser connection limit is exceeded. It is necessary to manage a single blocking connection, and share it between multiple browser views or portlets as the case may be.
So from the client perspective, again, we see that supporting the Asynchronous Web is doable, but we will have to apply Ajax techniques and deal with several intricacies related to maintaining the necessary blocking connection associated with long polling.
What about server connection sharing?
While we are on the topic of connection management related to multiple views and portals, let's return to the server side of the problem and revisit the blocking connection, in particular for portal environments. We don't have a problem provided that we are dealing with a single web application, since we have a single blocking connection to manage, but in a portal environment we face the possibility that portlets can be deployed from multiple web applications. In this case, where do we manage the single blocking connection that the client browser must establish? None of the existing portal engines considers this need for a shared ARP mechanism, so again we will be forced outside of the current Java EE specification and industry-supported portal engine implementations. We will need additional server-side infrastructure to provide a single shared connection for handling the long polling mechanism among portal applications, and will have to leverage IPC via JMS or some similar mechanism to do so.
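The fan-in that such infrastructure must provide can be sketched in-JVM (this is only an illustration with invented names; in a real portal the registrations would cross web-application boundaries via JMS or similar IPC): portlets from different web applications register interest with one hub, and updates arriving for any of them are delivered over the single shared blocking connection.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Sketch of a shared notification hub: many portlets, one blocking
// connection. Each update arriving on the shared connection is fanned
// out to every interested portlet.
public class SharedConnectionHub {
    private final List<Consumer<String>> portlets = new CopyOnWriteArrayList<>();

    // Each portlet registers a callback instead of opening its own
    // blocking connection.
    void register(Consumer<String> portlet) { portlets.add(portlet); }

    // Deliver one update from the single shared connection to all views.
    void deliver(String update) { portlets.forEach(p -> p.accept(update)); }
}
```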
And finally, what about a programming model for all this?
Up until now we have been primarily discussing the plumbing required to support the Asynchronous Web, but plumbing does not result in applications, and ultimately applications are what will revolutionize our use of the Web. From the previous discussion, we already know that some level of client-side Ajax programming is required, but this is well outside the Java EE standards for the presentation level, namely JSP and JSF. Are we going to be forced to move to a client-centric programming model to build Asynchronous Web applications, or can we leverage existing standards? JSF will be the predominant programming model for Java EE going forward, so it represents the most logical place to address the problem. We know that the existing JSF 1.2 specification does not even address Ajax capabilities, so there is not much there to draw on, and while the JSF 2.0 specification introduces some Ajax mechanisms, it does not extend the programming model beyond the normal request/response lifecycle. So once again, we will be forced outside the Java EE specification to adopt a programming model for Asynchronous Web applications.
More questions than answers?
So we get the sense that the Asynchronous Web is definitely possible, but the Java EE standards will do little to provide comprehensive support for it in the foreseeable future. We have a grasp of the low-level plumbing required, and understand a number of intricacies associated with that plumbing, but have not determined an appropriate programming model for building actual applications. Frankly, we are left with more questions than answers at this point.
In the second part of this article we will reverse that trend and provide some concrete answers to these questions. In particular, we will focus on JSF as the programming model, and examine techniques for extending JSF to illustrate how the Asynchronous Web can be supported using Java EE-based techniques.