Day three of JavaOne started off with an IBM keynote, included the Brazilian presentation on annotations, and covered even more ground on JSF. To be sure, plenty more happened on day three – JavaOne is always busy! What did you notice about day three?
IBM Keynote by Robert LeBlanc, General Manager of WebSphere
Overall, the keynote seemed a rather urgent attempt to re-declare IBM’s commitment to Java, allaying fears that arose after IBM’s lack of participation over the last year.
The keynote started strongly with IBM showing a slide of its contributions to Java over the last 10 years, with Robert stating clearly: “We’re going to continue our strong commitment to Java... We are going to continue to participate and drive forward through the JCP... We invest over $1B into our WebSphere product line; that is the core brand of our Java investment... We are going to continue helping Java be the most pervasive platform it is today.”
Robert then went on to talk about WebSphere 6 features, such as pluggable messaging. Discussing the amount of progress IBM has made with Java, Robert pointed to a number of metrics:
- IBM delivers 6 Java products, spanning platforms such as Linux, AIX, z/OS, and HP-UX
- IBM participates in over 150 specifications and leads over 25
- Each JVM IBM delivered has increased in performance from version to version, with v1.5 being 250% faster than the first versions
- IBM’s zSeries mainframe has Java acceleration chips running within their hardware
In terms of continuing to innovate in Java, IBM pointed to their work on Service Data Objects (SDO), real-time programming, and AOP programming with AspectJ.
A demo was shown illustrating the power of IBM’s various contributions. Robocode was used with Eclipse and AspectJ to develop a set of robots. The robots were designed as objects and the features of the robots, that modify the robots, were done as aspects. A very nice illustration of AOP in action, since Aspects are supposed to be where you encapsulate behaviour that modifies objects.
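AspectJ syntax aside, the idea the demo illustrated can be sketched in plain Java with a JDK dynamic proxy: behavior layered around an object without touching its class. The Robot interface and the “upgrade” advice below are hypothetical stand-ins for the demo’s aspects, using modern Java syntax for brevity.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class AspectSketch {
    // Hypothetical robot interface standing in for the Robocode demo's objects
    interface Robot {
        String fire();
    }

    static class BasicRobot implements Robot {
        public String fire() { return "pew"; }
    }

    // Cross-cutting behavior applied around every method call,
    // analogous to AspectJ "around" advice modifying the robot
    static Robot withUpgrade(Robot target) {
        InvocationHandler advice = (proxy, method, args) -> {
            Object result = method.invoke(target, args);
            // "Modify" the robot's behavior without touching its class
            return result + "!!!";
        };
        return (Robot) Proxy.newProxyInstance(
                Robot.class.getClassLoader(),
                new Class<?>[] { Robot.class },
                advice);
    }

    public static void main(String[] args) {
        System.out.println(withUpgrade(new BasicRobot()).fire()); // pew!!!
    }
}
```

The aspect (here, the invocation handler) stays in one place, while the robot class stays oblivious to it, which is exactly the encapsulation point the demo was making.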
After the demo, Robert discussed IBM’s commitment to openness and open source. Eclipse represented $40M of technology investment, and IBM led or co-led most of the WS-* specifications. IBM has put its weight behind Geronimo and plans to put substantial resources into it, believing Geronimo will accelerate the use of J2EE in the community. “The Gluecode project will become the low-end appserver in the IBM environment.”
Moving onto SOA, Robert claimed that the industry is starting to recognize that we need interoperability at the component level, allowing companies to mix and match old and new services into new composite applications without needing to rebuild any of those services. “Integration is becoming the single biggest challenge in the enterprise”.
IBM yesterday announced a partnership with 60 companies to work together on an open-architecture SOA subsystem to easily integrate their offerings. “We’re committed to working with the industry, our partners and standards bodies.” Robert then walked through a case study explaining how Standard Life developed over 300 web services internally, which will allow them to build over 70 different channel applications on top of those services, saving over £2M over 2 years.
Robert concluded by looking at the mantras of the Java community – compatibility, community, creativity/productivity, performance, and, again, the importance of community. “These are the values that are driving what we do in IBM.”
EDC: trends in Enterprise Java
EDC analyst Albion Butters began by showing a graph of Java usage among developers: Java usage among ALL developers stands at 40% in NA and EMEA, and at 50% in APAC.
For J2EE adoption in the enterprise, two-thirds of enterprise developers do at least a portion of their application development in J2EE, and 25% consider themselves heavy users.
In the enterprise sector, they found that Java is being used much more than .NET, contrary to the general development community, where .NET is used much more than Java.
Java developers are more active in developing web services enabled apps than .NET developers.
90% of heavy Java users are writing multi-threaded applications, whereas only half of non-Java users are.
“Perhaps one of the most significant aspects is the relationship between Java and open source.” EDC found that 70–80% of Java users use open source, versus only 40% of non-Java developers. Similarly, 80% of Java users have confidence in Linux for mission-critical systems, whereas only 20% of non-Java users trust Linux.
For some time, usage of VB 6 has been in decline, with a slow migration to VB.NET; the drop-off in EMEA has been much sharper, while Java usage has ramped up faster.
Albion concluded by re-summarizing some of the findings listed here, noting also that familiarity with Java breeds awareness and acceptance of alternatives to Microsoft, such as open source and Linux.
Design High-Performance, Scalable Job Processing Framework Using J2EE™
John Phenix, a VP at JP Morgan, managed the building of a job processing framework using a tuple space. It breaks tasks down into discrete jobs, and job consumption is separated from job production. They started with TSpaces from IBM, but had licensing issues. They then tried JavaSpaces, but since every call into JavaSpaces is an RMI call, performance was an issue; other tuple space products at the time (4 years ago) were not mature.
Their requirements were:
- independence from appservers, J2EE
- flexible configuration and deployment
- non-intrusive deployment model
- sometimes they need to do sequence processing
- put and get multiple units of jobs into a space
- they were biased towards POJOs; they didn’t want to be locked in
They developed a framework which allows pluggable remote job scheduling, e.g. as stateless session beans, SOAP services, or local threads. They had a lot of configuration flexibility, such as what the commit rate should be and how often work should be saved (e.g. if they batch 100 jobs at a time, they might save their work every 5 jobs, in case one at the end dies).
Interestingly, they had a local job “consumer” who would decide at deployment time (via descriptors) whether jobs should be consumed in EJB, web services, RMI, or local threads.
The patterns employed were Producer-Consumer, Builder, and Decorator.
- they went for multiple tuple spaces over a single networked space
- the remote call is a big overhead for small jobs
- for resiliency, it can be better to have multiple spaces, because it takes too long to fill one if it goes down
- they don’t guarantee uniqueness of jobs across tuple spaces – the synchronization overhead is too high
- they found it essential to have pessimistic locking around remote services, because they could never tell if a remote job was still running. For example, if the caller of the ejb goes down while a remote ejb is running, you’d lose the handle and would have no way of knowing if the job succeeded.
- they used multiple VMs on each box, in a cluster – they scaled better than a single VM per server
- keeping all of their logic in POJOs has meant easier integration with third-party products like Transformation Manager from ETL Solutions
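The core contract described above can be approximated in a few lines of plain Java; this sketch (class and method names hypothetical, not from the talk) shows job production fully decoupled from job consumption through a shared space:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class JobSpace {
    // A minimal "tuple space" for jobs: producers and consumers
    // never interact directly, only through the space
    private final BlockingQueue<Runnable> space = new LinkedBlockingQueue<>();

    public void put(Runnable job) throws InterruptedException {
        space.put(job); // producer side
    }

    public Runnable take() throws InterruptedException {
        return space.take(); // consumer side; blocks until a job exists
    }

    public static void main(String[] args) throws Exception {
        JobSpace space = new JobSpace();
        // Producer thread writes jobs into the space
        new Thread(() -> {
            try { space.put(() -> System.out.println("job ran")); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }).start();
        // Consumer takes a job and runs it locally; the real framework
        // could instead dispatch it to an EJB, SOAP service, or RMI target
        space.take().run();
    }
}
```

The pluggable-consumer idea from the talk maps onto the `take()` side: what happens to a job after it leaves the space is a deployment-time decision, not something the producer knows about.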
They obtained 3 patents while developing the job processing framework; they are interested in open-sourcing it, but are looking for a partner to help.
After covering a range of practical advice on buy vs. build, and on being creative about introducing new features instead of doing every cool thing, the talk turned to scalability.
Using Annotations in Large-Scale Systems: Code Automation in the "Tiger" Era
Fabianne Nardo, CTO of the Brazilian IT department that built the famous Brazilian medical imaging system on Java, presented on how they used annotations in their system.
After explaining how annotations work, and how to create a custom annotation and process it with the APT tool, Fabianne went on to describe their experiences.
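For readers who missed that part, a custom annotation is just a declaration plus a processor that reads it. The sketch below (names hypothetical) uses runtime reflection for brevity, whereas the APT approach covered in the talk reads the same annotations from source at compile time:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class AnnotationDemo {
    // A custom annotation; RUNTIME retention makes it visible to reflection.
    // (APT instead processes annotations during compilation.)
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    @interface Generated {
        String tool();
    }

    // Hypothetical annotated class, as a stand-in for their generated EJB code
    @Generated(tool = "velocity")
    static class SessionFacade { }

    public static void main(String[] args) {
        // The "processor": read the annotation back and act on its metadata
        Generated g = SessionFacade.class.getAnnotation(Generated.class);
        System.out.println(g.tool()); // prints "velocity"
    }
}
```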
At first they started with XDoclet 1 for code generation and were able to generate about 58% of all code in the project, capturing the knowledge of their J2EE experts in the tags that generated the rest of the code. The code generation was so extensive that it took 13 minutes to run it all! In total there were 543,020 hand-written classes and 744,495 generated classes.
They decided to move to annotations, to decrease the generation time. They explored new possibilities like runtime processing of annotations, or incremental generation of files like deployment descriptors, etc.
So they built annotation processors by hand that could handle EJB 3-compatible annotations and generate code via Ant. They chose EJB 3 annotations to make later migration easy. Behind the scenes, however, Ant generated EJB 2.x classes, including all the home/local/remote interfaces, value objects, Struts config files, Struts forms, and others.
Interestingly, they also added features such as interceptor and dependency-injection annotations themselves.
They implemented incremental generation of deployment descriptors. A ‘seed’ deployment descriptor was combined with the processed source files to generate the EJB 2.x deployment descriptors. Interestingly, on subsequent builds, they would take the previously generated deployment descriptor and combine it with only the new/changed annotated source files, saving time.
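The incremental trick reduces to an up-to-date check: regenerate a descriptor only if an annotated source file is newer than the previously generated output. A minimal sketch of that check, with hypothetical file roles, assuming nothing about their actual build:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class UpToDateCheck {
    // Regenerate only if the source is newer than the generated file,
    // or if the generated file doesn't exist yet
    static boolean needsRegeneration(Path source, Path generated) throws IOException {
        if (!Files.exists(generated)) {
            return true;
        }
        return Files.getLastModifiedTime(source)
                .compareTo(Files.getLastModifiedTime(generated)) > 0;
    }
}
```

Ant's own `uptodate` task provides the same comparison declaratively, which fits their Ant-driven generation pipeline.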
They used Velocity for the Java source file code generation. They created velocity templates for entity beans, session beans, client facades, struts forms, etc.
The presenters then showed a set of graphs comparing code generation performance of annotations with the APT tool vs. XDoclet + javac. In most cases, APT was much faster than XDoclet when generating fewer than 300 EJBs. Both scaled fairly linearly, although APT seemed to have some scalability problems at high loads in some cases. The presenters spoke to some of the APT developers from Sun while at JavaOne, who suggested they tune the heap/GC settings, because they thought it was not APT’s fault.
The talk went on to describe an internal team debate they had – should they generate code or use introspection at runtime? Their conclusion was that you should generate code when:
- you can’t refactor behavior to a superclass or a helper class
- you need or want compile time safety that can’t be achieved with generics or introspection
- you want AOP like features but your company isn’t ready for it yet – you can do this with decorators
- when you need your designs to be synchronized, such as keeping the home/remote and ejb classes in sync
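The decorator point above is worth a concrete shape: wrapping a component in a same-interface class lets you layer cross-cutting behavior (logging, timing, auditing) around it without AOP tooling. A minimal sketch with hypothetical names:

```java
public class DecoratorDemo {
    interface Service {
        String execute();
    }

    static class CoreService implements Service {
        public String execute() { return "result"; }
    }

    // Decorator: same interface, wraps a delegate, adds behavior around it
    static class LoggingService implements Service {
        private final Service delegate;
        LoggingService(Service delegate) { this.delegate = delegate; }
        public String execute() {
            System.out.println("before execute"); // cross-cutting concern
            String out = delegate.execute();
            System.out.println("after execute");
            return out;
        }
    }

    public static void main(String[] args) {
        Service s = new LoggingService(new CoreService());
        System.out.println(s.execute()); // prints "result" after the log lines
    }
}
```

Code generation makes this tolerable at scale: the boilerplate wrapper classes are emitted by the generator rather than written by hand.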
They used awk scripts to transform XDoclet tags into annotations, used XDoclet itself for part of it, and, of course, much manual tweaking was required throughout.
Interestingly, they found no big change in developer productivity, since annotations are very similar to XDoclet. However, recall that their original reason for changing was the performance of the code generation itself: generation time dropped to 6 seconds, although the presenter warned that other improvements were also made during this phase.
Building the Compelling Case for JavaServer Faces
During the afternoon technical sessions, the Yerba Buena Theatre hosted a series of talks on JavaServer Faces of particular interest to web developers. At 12:15, developers lined up outside the door to gain entrance to "Struts to JavaServer Faces: A Programmer's Guide," where Kevin Hinners, Senior Technical Analyst at FedEx Services, laid out three very effective strategies for projects seeking to migrate from Struts to JSF:
- Containment Strategy:
- Summary: Reuse your Struts ActionForm beans and Action classes by containing them in JSF managed beans. This gives the developer the freedom to rewrite the views using JSF tags, leveraging the JSF component model, while avoiding the need to rewrite the entire application.
- Pros: This approach has the shortest learning curve while getting the project deep into JSF. By quickly embracing JSF tags, developers can leverage the added functionality and power of JSF components while preserving much of the existing application logic inside the contained Struts classes.
- Cons: Projects making extensive use of Struts DynaBeans, Struts Validators, or Struts Tiles will have trouble leveraging their investment using this approach.
- Bottom Line: This strategy has the shortest learning curve and quickly provides the project with the benefits of JSF.
- Rewrite Strategy:
- Summary: Rewrite the application logic to make use of JSF. This entails rewriting the Struts ActionForm beans and Action handlers as JSF managed beans (POJOs), replacing the Struts tags with JSF components, and moving the Struts navigation logic into faces-config.xml
- Pros: Provides a clean-room implementation of the application using JSF without intrusion by legacy Struts
- Cons: Entails an application rewrite, which is often hard to justify if the application is already working as expected
- Bottom Line: This strategy is ideal when project needs already necessitate a rewrite/refactor of the application, independent of the switch to JSF.
- Integration Strategy:
- Summary: Use the JSF-Struts Integration toolkit available from the Struts project to allow Struts and JSF components to coexist in the same application.
- Pros: This approach allows you to preserve all of your existing Struts logic and Struts presentation while making use of JSF components for new logic.
- Cons: This approach requires everyone on the project team to know JSF, Struts, and the integration library, leaving a rather steep learning curve for new project developers. Additionally, this dual-framework approach can leave application logic rather spread out.
- Bottom Line: Use this strategy only when the project must preserve a significant investment in custom Struts logic such as dynabeans, tiles, or Struts validators.
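To make the rewrite strategy concrete: both of its moves (ActionForms becoming managed beans, navigation leaving Action code) land in faces-config.xml. A hypothetical fragment, with invented bean and page names:

```xml
<faces-config>
  <!-- POJO registered as a managed bean, replacing a Struts ActionForm -->
  <managed-bean>
    <managed-bean-name>loginBean</managed-bean-name>
    <managed-bean-class>com.example.LoginBean</managed-bean-class>
    <managed-bean-scope>request</managed-bean-scope>
  </managed-bean>

  <!-- Navigation rule replacing a Struts action forward -->
  <navigation-rule>
    <from-view-id>/login.jsp</from-view-id>
    <navigation-case>
      <from-outcome>success</from-outcome>
      <to-view-id>/welcome.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>
```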
During the session Kevin also laid out a list of best practices to use when conducting a Struts to JSF migration, which happen to also qualify as best practices for web projects in general:
- Unit-Tests first, prior to migrating or refactoring. This will help eliminate bugs in business objects that cause hidden errors and prevent breakage while converting.
- Stub-out pages and navigation early. This will help provide visibility into how the entire application will fit together.
- Eliminate Java from JSPs.
During the Q&A session which followed, there was a great deal of questioning as to why Struts developers should consider moving to JSF. Kevin quickly listed a number of benefits:
- More advanced rendering for a much richer user interface through the component model
- Application logic is contained in POJOs (plain old Java objects) through the JSF managed bean facility, rather than in Struts-specific classes.
- More powerful validation than in Struts... JSF has all the facilities of Struts validation, plus more.
In the end, it was clear there was quite a bit of excitement in the audience about JSF. It appears that project teams may finally be ready to break free of Struts and adopt JSF as the next-generation web framework. And with this presentation, developers are now armed with 3 very powerful migration strategies.
As if the excitement following that session wasn’t enough, at 2:45 in the very same room, JBoss’s MyFaces guru Stan Silvert presented “How to Build Killer Portlets Using JavaServer Faces Technology”. The room was again packed, with a line out the door, as even more people came to attend this presentation. This shows two key things: people want to write portlets, and they are looking to JSF to help them. And the talk showed just how powerful JSF can be in the portal environment. In the presentation, Stan answered the following key questions:
- Why you should use JSF to create Portlets: JSF is the only major web framework built from the ground up with portlets in mind. Changing a JSF application to run in a portal takes very little work compared to changing a Struts app or an app written on an in-house framework.
- What a JSF technology developer needs to know about Portlets: Portlets are essentially self-contained mini-applications that run inside a region in a larger portal page. In order for portlets to co-exist as independent apps running in the same webpage, there is a special contract Portlets must follow which differs from the Servlet model, such as strict adherence to a 2-phase request servicing lifecycle. Additionally, Portlets may not redirect the browser, as this might break other Portlets on the same page.
- What a Portlet developer needs to know about JSF technology: Here, Stan presented a very effective and condensed summary of the main features of JSF, including:
- The Managed Bean Facility which lets POJOs be bound to JSF application pages
- The Validation Facility which allows development of a reusable set of validation rules to be applied to user input
- The Rich and Extensible Component Library, JSF's biggest strength, which allows for development of complex components such as a calendar control or data table which can easily and quickly be embedded in a page and handle user input via the JSF contract
- Pluggable Render Kits which allow components to have multiple renderings, similar to how Swing has Pluggable Look and Feel, but here it's pluggable markup rendering for different components running on different types of devices.
- Navigation (page-flow) to decide which page to go to next
- Preserving application state across requests... the managed bean facility removes the need to constantly be stuffing state into the session manually.
- Conversion Model to facilitate the conversion of data-types to and from Strings.
- How to convert a JSF app to run as a Portlet: This was the truly impressive part of the presentation. Whereas converting a Struts application to run in a portal involves a long, drawn-out process with pages and pages of instructions, converting a JSF app to portlets is truly simple… with just a few configuration file changes, Stan was able to turn his stand-alone application into a tiny portlet running inside a much larger portal page.
- How JSF Portlet Integration Works: JSF hides the Servlet-specific API, so the ExternalContext simply gets switched over to use the Portlet API underneath instead. The Faces controller portlet then supports the 2-phase lifecycle required by portlets, and JSF has a "view-handler" that knows how to render URLs that work in portlets. So you'd never hard-code URLs into your JSP... you let JSF create the URLs.
- Examples of Advanced JSF Portlets
At the end, Stan brilliantly summarized the bottom line in what is perhaps the most compelling case for JSF in Portlets yet… "If you're going to build a portlet, why not use a framework. If you're going to use a framework, why not use the framework that was built for portlets from the beginning."
Covering some other bases
Adobe showed off LiveCycle, their workflow automation platform, at an Adobe party on Wednesday. LiveCycle is Adobe's PDF-oriented implementation of an Enterprise Service Bus. I asked whether they provide persistence; they said LiveCycle implements a persistence object process (POP) layer that handles persistence and transactions. If the process stops unexpectedly, POP keeps track of the transaction and can restart it. POP is a paradigm similar to XA. LiveCycle will integrate with the Java Business Integration model.
RainingData introduced SOAR - the SOA Repository based on TigerLogic.
SOAR provides a native XML database and XQuery engine that, in an Enterprise Business Bus application, makes possible service-request acceleration through caching, along with transformation and normalization of multiple schemas. Most people I spoke to about JBI said they saw a hole in that JBI does not include an XML persistence tier.
JavaOne hosted a track on Java to .NET interoperability. The opening session showed Microsoft's distributed server manager making Web Service calls to Solaris boxes to warn of equipment problems and to switch over to alternate equipment on failures. The demo was really just to show how Sun and Microsoft are "making nice" after the Java lawsuit settlement. The SOA interoperability session talked about problems in data types and description languages. For instance, WSDL definitions for a JDBC and an ADO.net dataset vary and cause interoperability problems.
The Birds of a Feather (BOF) session on Java Mobility included a timetable for JSR 248, the Mobile Service Architecture for CLDC. They said to expect Beta 2 this winter, with final release in the second quarter of 2006. Future plans include a SIP API (JSR 180), Payment API (JSR 229), and others.
Satish Hemachandran, product manager at Sun for RFID, gave a BOF on RFID technology. Sun is focusing on supply chain management and identity management. Satish described the efforts of the EPCglobal standards body to define workflows and APIs for supply chain solutions. There were lots of questions about EPCglobal solutions integrating with JBI; many of the answers were "coming soon."
Also mentioned was JSR 223, a JNI-based framework for calling scripts from within Java apps.
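For context, JSR 223 eventually shipped as the javax.script API in Java SE 6; a minimal sketch of calling a script engine from Java follows. Engine availability depends on the runtime, so the lookup is guarded rather than assumed:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class ScriptDemo {
    // Evaluate an expression in a named script engine,
    // or return null if that engine isn't available on this runtime
    static Object evalOrNull(String engineName, String expr) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName(engineName);
        return engine == null ? null : engine.eval(expr);
    }

    public static void main(String[] args) throws Exception {
        Object result = evalOrNull("javascript", "6 * 7");
        System.out.println(result == null ? "no JavaScript engine available" : result);
    }
}
```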
When asked what James Gosling regrets about Java, he said, "AWT. We did the whole of AWT in 3 weeks. I wish we didn't do the whole AWT thing."