TheServerSide at JavaOne 2003 - Day 4

The final day looked at growth areas for software, compared .NET to Sun's Java platform, and considered how both could improve in the future.


June 2003


The final day of JavaOne commenced with a keynote featuring Scott McNealy and Jonathan Schwartz who looked at growth areas for software and compared .NET to Sun. The keynote was followed by a demo and a panel on mobile technology. Technical sessions covered were Standardizing Content Management (JSR 170), New Concurrency Utilities, a session on JXTA and JCA, Using URL Tokenization for Managing Distribution Content Services, a Case Study on Capital One's high volume system, and Web Service Versioning and Deprecation.

Keynote: Nigerian Rhythms, Growth Areas for Software (and Sun), the 'Dukies', Demo Debacle, Wireless-Mania

The Friday morning (day 4) keynote kicked off with the exotic rhythms of Ketoja, a Nigerian-based band dressed in colourful robes with an assortment of interesting instruments. Who knows what they were thinking, as they played their highly danceable music and fast rhythms to the information-hungry and largely static bunch of developers waiting patiently for McNealy to enter the stage.

Eventually, Jonathan Schwartz and Scott McNealy came on stage, and after thanking the band, McNealy proceeded to entertain developers with great lines such as "It's so last millennium to write to the OS, now we write to the Java OS Layer".

"Every party has a pooper" said McNealy, referring to a San Jose Mercury News article entitled "Platform makes gains, but does Sun?" In mutterings and side discussions throughout the conference, people were wondering what Sun's fate is. Yes, Java is doing great, but the Solaris OS and hardware platform don't seem to have a lot of growth potential. McNealy countered these sentiments with a sincere "don't worry, we're doing fine", going on to say that "we're more bullish on our own personal outlook BECAUSE of Java's success".

Following the pep talk, McNealy proceeded to discuss some growth areas Sun sees for itself and also for the general industry... "Plastic wrapped" vs. shrink-wrapped software was one of them. McNealy stated that software is increasingly being sold less as 'on the shelf in shrink wrapped boxes', and more as 'plastic wrapped hardware' (another term for appliances). This is not a new point, as Sun has been pushing its Sun Ray appliances for some time.

The next interesting (and more controversial) growth area McNealy addressed was "rack wrapped" software, a term he used to refer to third party hosted software, such as online mail and similar offerings. McNealy claimed that rack wrapped software is attacking the modern data center, and that the future of computing will lie increasingly with these types of systems.

McNealy then started into a "report card" session, which compared .NET and Java. This felt like something which some marketing underlings had given him. He didn't seem to really be into it, and kept saying "aren't we done?" and "there are too many of these". As you would guess, Sun gave itself A's and a few B's (need some work), and .NET got C's, F's, and no shows. These were just easy jabs, and perhaps something he really didn't need to do at all.

There were a couple of funny comments like: "As opposed to a certain company, I didn't have to send an email to my employees to tell them security is important"

This year was the first for the "Dukies". Duke (or as Scott calls it: The Tooth) is that weird mascot. This really felt like a David Letterman Top Ten list. Awards were given in various (and random) categories, such as "Best Java code outside of the earth": Mars Rover. By far the best area was "Java in the city". It was difficult to understand this section, until we found out that HBO had won. Scott said that HBO didn't let him say "Sex in the city", so from now on, "where you see the word Java, replace it with Sex". I felt sorry for the female Sun employees who were wearing Sun t-shirts. As with the report cards, this one dragged on for a bit. It was similar to watching a Saturday Night Live sketch... you just wanted it to end and get to the next one!

Next was one of the funniest demos ever given. The CEO of one company was brought on stage, after we were told that they are the fastest growing company in the valley (???). The CEO went to start his demo, but couldn't get "the screen" he wanted to come up. He started to sweat, and then a techie came to his rescue. Someone slowly came on stage, walked over to the machine, and took the mouse. He looked at the audience, at the CEO, and then clicked "Login". This got one of the largest cheers of the week! Nice of the CEO to HAVE A CLUE about his product :) Now he got to "his screen" and was rambling, talking about applets, Web services, and many other buzzwords. The result that we could see was just a bunch of portlets which didn't look cool in the slightest. Are we missing the point of this one? It was also hard to tell whether it was the portlets he kept referring to as "applets"... they just looked like HTML tables to me!

A panel was brought up to finish the keynote. It was a group of mobile geeks (Vodafone, Nokia, Motorola) with one "cool" guy, not in black. He was from the music biz (Warner Bros.) and it wasn't hard to guess. The thing about these panels is that they look totally acted out. Everyone was on cue, and nothing really interesting was said. Once again, the Vodafone honcho told the audience to start coding mobile games, as it would make us millions. We know mobile is big. We know it is growing. It isn't here yet, though, and in fact, it is almost painful to be on the networks in the US compared to Europe and Japan. Perhaps Sun should preach to the mobile market more when they can get full wireless access into the Moscone, too. We should be able to go to the bathroom and still be online at that place.

Technical session on JSR 170 (javax.cms.*) - Standardizing Content Management

About 250 people showed up for the talk on the JCP work being done to produce a standard API onto any content management system - and rightfully so. JSR 170, if it succeeds, could impact the content management industry as heavily as J2EE impacted the application server industry. The room was filled with people who had had their share of proprietary CMS packages and were eager to see standards, evidenced by the fact that 90% of the room put up their hands when the speakers asked if they had programmed to CMS packages before.

Content management is one of the least standardized areas in software. There are over 200 active content management vendors and innumerable custom-developed systems. The JSR expert group has over 50 members, consisting of major content, document and web content vendors, as well as portal vendors, appserver vendors, integration consultants, and open source people.

To prove the need for CMS standards, the speakers showed a simple use case of web page data-access code - there were dozens of completely different ways of writing this code depending on the portal product you were using. What a nightmare!

JSR 170 intends to create one API that interacts with a CMS, giving customers all the benefits of a standard (lower learning curve, swappable CMS vendors, etc). The speakers compared what a CMS API would do to CMS vendors to what SQL did for databases and J2EE for appservers.

Like the JDBC APIs, JSR 170 is seeking to define two levels of compliance. The first level is a simpler level intended to be achievable for all repositories, and is geared to be easy for vendors to implement. Level one standardizes the notion of a data repository, how one connects to it via JNDI, how one authenticates to it, and a simple API for CRUD access. Some examples of the current working API were shown:

 getProperty(String path), addProperty(String path, String value)
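Those calls suggest a very small surface area. As a rough illustration (not the actual draft API - the class and package names here are hypothetical), a level-1 style repository could be exercised like this:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory stand-in for a level-1 repository; the real
// JSR 170 draft defines a richer node/property model on top of this idea.
public class SimpleRepository {
    private final Map<String, String> properties = new HashMap<>();

    // Read a property by its path (level-1 CRUD: the "R").
    public String getProperty(String path) {
        return properties.get(path);
    }

    // Create or update a property at a path (level-1 CRUD: the "C"/"U").
    public void addProperty(String path, String value) {
        properties.put(path, value);
    }

    public static void main(String[] args) {
        SimpleRepository repo = new SimpleRepository();
        repo.addProperty("/jsr170/title", "Content Repository for Java");
        System.out.println(repo.getProperty("/jsr170/title"));
    }
}
```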

Level 2 compliance is an extension of level 1 that exposes more extensive repository functionality, such as versioning, transactions, object locking, a standard programming model, event monitoring, content packaging, access control, etc. All these features are intended (in the speakers' words) to further differentiate "data from content".

As an example demo, the speakers showed a recreated version of JCP.ORG using the current community draft of JSR 170. The page looked pretty simple. They had simple getters/setters for properties. They then logged into the JSR 170 RI management interface, which showed a hierarchical tree view of the jcp site, and clicked on a page representing JSR 170. The UI showed all the logical elements (i.e., title, abstract, different sections) of the page as nodes on the JSR 170 "tree".

They were then able to back up the JSR 170 node to a file, and reimported it into an empty content repository (using the JSR 170 RI). And lo and behold, the import/export was flawless. The presenters made a point that a similar tactic would not have been possible without JSR 170; it's simply too expensive to change repository vendors.

The standard will be in community review in the next couple of months, with public review at the end of the year. This is a very exciting standard. A lot of J2EE apps we write today are not really dynamic enough to warrant being written in J2EE - they are really designed for a CMS. Once JSR 170 comes out and there is decent industry support, we may find a lot more websites being written using a commercial or open source CMS package instead of homegrown JSPs.

New Concurrency Utilities

JSR-166 represents a major improvement to Java's concurrency capabilities by introducing the java.util.concurrent package, potentially for inclusion in J2SE 1.5. This package is based on Doug Lea's widely adopted concurrency package, and includes battle-tested, industrial-strength, peer-reviewed concurrency utilities.

What's fascinating about these libraries is that they rarely, if ever, use Java's native synchronization mechanisms. Instead, the library includes a native-code module that uses hardware-specific atomic primitives (such as compare-and-swap) for locking. This leads to much better scalability and performance than native Java synchronization.

Examples of tools in this kit include:

  • Executors, a framework for asynchronous invocation, eliminates the need to spin off your own Threads. It enables thread pools, cancellation and shutdown of threads, etc.

  • Various data structures such as PriorityQueue, a fast thread-safe non-blocking LinkedQueue, and BlockingQueue (used for producer/consumer designs). A ConcurrentHashMap is also included, allowing concurrent reads and writes, with most reads occurring without any locks.

  • A TimeUnit class for nanosecond-granularity timing, assuming your OS supports nanosecond-level timing.

  • Read-Write locks for allowing multiple-reader and single-writer access to shared data structures, probably the most useful tool for replacing synchronized blocks.
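As a taste of the producer/consumer style described in the BlockingQueue bullet above, here is a minimal sketch using the java.util.concurrent names that eventually shipped in J2SE 5.0 (written in current Java syntax for brevity; the 2003 preview package lived under a different namespace):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        // A bounded, thread-safe queue: put() blocks when full, take() when empty.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(10);

        // The Executor framework replaces hand-spun Threads with a managed pool.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        pool.execute(() -> {                 // producer task
            for (int i = 0; i < 5; i++) {
                try { queue.put(i); } catch (InterruptedException e) { return; }
            }
        });
        pool.execute(() -> {                 // consumer task
            try {
                for (int i = 0; i < 5; i++) {
                    System.out.println("consumed " + queue.take());
                }
            } catch (InterruptedException e) { /* shutting down */ }
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```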

This is just a small sampling of the features available in util.concurrent; the package is available for download at:

P2P For the Enterprise: A Project JXTA and J2EE Connector Architecture Solution

The emergence of SOAP and Web services has primarily been driven by the dream of interoperability. Other approaches such as Jini have been stifled by their lack of language interoperability, but JXTA has shown a lot of promise and progress since its introduction in 2001.

JXTA has the potential to extend the Enterprise Information System (EIS) beyond the walls of the enterprise, by creating a fault-tolerant P2P EIS.

If Java implies platform independence, and XML implies language independence through interoperability, then JXTA may represent network independence. JXTA is effectively a way of layering a virtual network on top of the physical one.

This session gave an overview of the J2EE Connector Architecture, JXTA, and an example of wrapping JXTA with a JCA adapter to allow JXTA peer networks to be used within an EIS, shielding corporate developers from the details of JXTA.

URL Tokenization: A Cornerstone for Managing Distribution Content Services

Managing security among collaborating Web services is an increasingly common problem. This session discussed a lightweight, low-bandwidth approach to ensuring security among three parties using single-use tokens.

But first -- a rant about the JavaOne speaker uniforms! The speaker joked that he could go bowling on the IBM (yep, he said IBM) bowling team, or perhaps he could change your oil after the session.

Back to URL tokenization.

The scenario is this:

Party A (a content vendor)
Party B (a client / web browser)
Party C (a content creator)

The goal is that Party A wants to grant Party B access to content held by Party C.

One way to solve this problem would be to use three-way SSL. Unfortunately, this requires a fair amount of bandwidth, and Party A must potentially maintain a relationship with large numbers of Party Bs and Cs. Creating and distributing secrets to all of these other parties is expensive, as ensuring the identity of the other parties requires two-way SSL. Two-way SSL is not available on all devices, as it would require a Certificate Authority-signed certificate at each party, including the web browser!

Another approach is "federated identity", such as that promoted by Project Liberty.

The URL Tokenization approach is as follows:

Step 0: At time 0, Party A sends C a cryptographic seed
Step 1: Party B requests access to content held by Party C
Step 2: Party A generates a token and returns a tokenized URL (the original content URL with the new token appended)
Step 3: Party B uses the tokenized URL to access the content at C.

Because each request to C requires a new token, each request also requires a trip to A; the tradeoff in this approach is thus the "ping-pong" HTTP redirect between A and C.

This approach is relatively transparent to the client device, and remains secure even if the client only supports one-way SSL.
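The session didn't spell out how the tokens are generated; assuming the shared seed from step 0 is used as an HMAC key (a plausible but hypothetical construction), Party A and Party C could independently derive and verify single-use tokens like this:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.HexFormat;

public class UrlToken {
    // Derive a single-use token from the shared seed (step 0), the requested
    // content path, and a nonce that is never reused. Party A computes this
    // and appends it to the URL; Party C recomputes it to verify the request.
    public static String token(byte[] seed, String path, long nonce) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(seed, "HmacSHA256"));
            mac.update((path + ":" + nonce).getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(mac.doFinal());
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] seed = "shared-seed".getBytes(StandardCharsets.UTF_8);
        System.out.println("/content/report.pdf?token="
                + token(seed, "/content/report.pdf", 1));
    }
}
```

Because each nonce is used once, a replayed token simply fails verification on the next request.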

Case Study of a high volume account servicing app

In one of the final talks of JavaOne, a speaker from Capital One Financial, the sixth-largest credit card issuer in the US, gave a detailed look at the infrastructure of their J2EE credit card services application.

The requirements included login/logout, bill payment, account registration, account servicing, statement viewing, etc. The system had to scale to 80 million transactions per month, and pulled much of its data from Unisys/Tandem mainframes, for which they needed to use existing C/C++ middleware.

The topology looked pretty standard: a load balancer "round-robining" to a cluster of iPlanet web servers that use the WebLogic plugin to send Java requests to WebLogic servers running on two separate Sun 5600s, all backed by an Oracle DB on another machine.

They took a patterns-centric, layered approach. There were separate layers for the front end (implemented with Struts), a dynamic-proxy decoupling layer, a session bean layer, a DAO layer, and an integration layer that sat between the DAOs and the legacy systems they needed to interact with.

They used dynamic proxies to decouple presentation logic from business logic because they didn't want the presentation-layer developers to know anything about EJB. They used dynamic proxies with an EJB invocation handler and a stub invocation handler (the stubs are used to give back dummy data, which is useful for testing). A properties file determines which handler to use at deployment time. Dynamic proxies are cached in a proxy factory, much like the EJB Home Factory pattern.
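A minimal sketch of that decoupling trick with java.lang.reflect.Proxy (the interface and handler names here are invented for illustration; the real system also had an EJB-backed handler, selected via the properties file):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyFactoryDemo {
    // The business interface the presentation tier codes against;
    // it never sees EJB types directly.
    public interface AccountService {
        double balance(String accountId);
    }

    // Stub handler that returns dummy data, useful for testing the front end
    // without a deployed EJB tier.
    static class StubHandler implements InvocationHandler {
        public Object invoke(Object proxy, Method m, Object[] args) {
            if (m.getName().equals("balance")) return 42.0;
            throw new UnsupportedOperationException(m.getName());
        }
    }

    // In the real system a property read at deployment time would pick the
    // EJB handler or this stub handler; here we hard-wire the stub.
    public static AccountService create() {
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                new StubHandler());
    }

    public static void main(String[] args) {
        AccountService svc = create();
        System.out.println(svc.balance("1234"));
    }
}
```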

Also located within the dynamic proxy layer was a data cache. To speed performance, they implemented a cache for all data considered 'cross user'. Each 'type' of cached data is updated regularly using weblogic.timer services. They used two caches that rotate every 10 minutes: while one cache is being updated, the other serves requests, and the two swap roles on each cycle.
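The two-cache rotation might look something like this (a simplified sketch; the real system drove refresh() from WebLogic timer services on a 10-minute cycle, which also gives slow readers of the old map time to finish before it is cleared):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the rotating 'cross user' data cache: reads always hit the
// active map while the staging map is rebuilt; a swap makes fresh data live.
public class RotatingCache {
    private volatile Map<String, Object> active = new HashMap<>();
    private Map<String, Object> staging = new HashMap<>();

    public Object get(String key) {
        return active.get(key);            // lock-free read of the live cache
    }

    // Invoked by a timer every cycle; refreshes offline, then swaps roles.
    public synchronized void refresh(Map<String, ?> freshData) {
        staging.clear();                   // drop data from two cycles ago
        staging.putAll(freshData);
        Map<String, Object> old = active;  // swap: staging becomes live,
        active = staging;                  // the old live map becomes
        staging = old;                     // the next staging map
    }
}
```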

Another interesting tidbit is that during testing they used stubbed EJBs as well, so that they could test in isolation from the integration pieces, and so that they could test their EJB deployment to see that it worked.

On the data layer things got more complicated. Legacy data was accessed via old mainframes accessible via C/C++. They implemented their own data managers and custom connection pools to manage this communication. Application specific data was mapped to Oracle via their own O/R mapping code.

An interesting design decision was made for the legacy integration layer. They had to talk to C/C++ middleware in order to communicate with the mainframes. So, rather than import C/C++ DLLs into the appserver, they chose to suffer the hit of RMI to remote JVMs that import these DLLs. Importing the DLLs directly could bring the whole system down; they were willing to live with 'down services', but not a 'down system'. So they paid the performance price of RMI to get recoverability, and they said that there were many times they had to 'cycle' the remote JVMs.

Operationally, they did extensive testing, including unit, functional, regression, etc. They encouraged an environment of perfection and refactoring. They had a public 'refactoring list' that people could add to for discussion. Also, they didn't trust developers to write their own unit tests, because they felt developers would test only the 'happy path' in their own code; having others write the tests meant the tests would be tougher.

Altogether a great and educational talk...

Web Service Versioning and Deprecation

Another one of the final sessions at JavaOne was on Web Service Versioning and Deprecation.

The speaker joked over a theory about why he was stuck at the end: "If you take out the white pages of your phone book, and look in the business section, you'll probably notice a business or two beginning with the name Aardvark, probably a lot more so than "Zebra" or "Yak". Aardvark plumbing, Aardvark cleaners, etc. The idea is that in the yellow pages, you'll be the first name on the list! So, I got stuck at the end due to lexical sorting... 'Web Service Versioning'. Next year if you see a session called "Aardvark UDDI and Web Service Reuse', you'll know it's me! "

One of the keys to Web services in the enterprise is managing their lifecycle and their dependencies on clients. The problem with Web services is that you don't always know who is using your service, and you don't usually have complete control over its clients. Clients must be able to evolve at different times from the service implementation.

Unfortunately there is no standard way of providing versioning or deprecation information with Web services.

The speaker proposed a solution that embeds the version information as an XML structure, which can easily be represented in JAX-RPC as a Version JavaBean with Major, Minor, and Patch properties. Deprecation information can be stored as free-form identifiers in a UDDI entry's tModel instance.
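The Version bean the speaker described might look like the following (a sketch; the compatibility rule at the bottom is an assumption, not something the speaker specified):

```java
// Hypothetical Version bean of the kind the speaker described: a plain
// JavaBean that JAX-RPC can map to an XML structure alongside the service.
public class Version {
    private int major, minor, patch;

    public Version() { }                  // JavaBean no-arg constructor
    public Version(int major, int minor, int patch) {
        this.major = major; this.minor = minor; this.patch = patch;
    }

    public int getMajor() { return major; }
    public void setMajor(int major) { this.major = major; }
    public int getMinor() { return minor; }
    public void setMinor(int minor) { this.minor = minor; }
    public int getPatch() { return patch; }
    public void setPatch(int patch) { this.patch = patch; }

    // A client could use a rule like this to decide whether a deployed
    // version satisfies its requirement: same major, equal-or-newer rest.
    public boolean isCompatibleWith(Version required) {
        return major == required.major
                && (minor > required.minor
                    || (minor == required.minor && patch >= required.patch));
    }
}
```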

The key to seamless usage of new versions, and to migration of Web services to new physical locations, is dynamic binding: leverage UDDI for all of your Web service initializations. A Web service client should look up the service endpoint through UDDI and cache the endpoint for future invocations.
