JavaOne 2007 Day 2 and 3 Coverage



  1. JavaOne 2007 Day 2 and 3 Coverage (9 messages)

    By Frank Cohen

    Day 1

    Last week I met a managing director for a Tanzanian bank that makes microfinance loans: very small loans to entrepreneurs, frequently less than $100. The bank executive made a decision to buy a Java-based software package for $250,000 to handle the server-side and client interfaces to the banking management functions. Unfortunately, the bank spent another $250,000 on a Java engineer to make the system output the reports they need and integrate with existing bank IT systems. Today at JavaOne I really want to learn about new Java technology and patterns that reduce integration costs. I sought out enterprise tools and technology supporting SCA, JBI, and enterprise interoperability. Here is what I found.

    ActiveMatrix, The First Shipping Commercial SCA/JBI

    Tibco demonstrated ActiveMatrix 1.0. The new product is already in use at Delta Airlines. ActiveMatrix provides a message-based infrastructure and component model that lets Java developers write EJB components that plug into its message infrastructure. The SCA architecture handles deployment and makes it easy to move a component from one server to another. I wrote a white paper earlier this year introducing Java developers to the idea of service virtualization. Virtualizing the runtime environment makes it easier to govern the applications and to recover from the inevitable operating problems.

    Update From Sun on JBI and SCA

    I sat down for a conversation with Jim McHugh, VP of Software Infrastructure Marketing at Sun, and Kevin Schmidt, Director of Product Management for SOA/Business Integration at Sun. Kevin came to Sun from SeeBeyond and did most of the talking; Jim did most of the nodding. Kevin gave me an update on the JBI efforts at Sun. OpenESB is Sun's reference implementation of JBI. Sun announced JBI 2.0 this week and held the first meeting of the JSR 312 expert group here at the conference.
JSR 312 primarily adds better deployment and manageability and makes JBI more complementary to SCA. Kevin told me OpenESB is more than a Sun initiative, in that its open-source community has participation from Imola Informatica and Bostech. Attendees at the recent Java Symposium may remember Ross Mason, Mule's CTO, saying Mule does not support JBI because JBI solves a subset of the problems in integration application development while Mule solves problems at the messaging level. In Ross' opinion, JBI attempts to standardize a solution to integration problems, where, in Mule's experience, enterprise customers have different and unique needs. Ross stated that Mule will not be standardized and provided as a JBI container. However, Ross did see promise in SCA, which is a way to wire services together in a uniform way. Mason said SCA is a good fit for Mule and that there are plans for Mule configuration to use SCA configuration; Mule is talking with the Tuscany folks to make this happen. From Kevin's perspective, "this is not a JBI versus SCA issue. SCA is more about component metadata and JBI can be the runtime." Sun is a proponent of SCA, joined the SCA group last year, and is participating in its transition to the OASIS standards body. Kevin pointed to proponents of JBI, including IONA, LogicBlaze, Tibco, and Sun with its Java Composite Application Suite (JavaCAPS). Eric Smith, a software architect consultant to Boeing, seems to agree: Eric told me Boeing is specifically looking for JBI solutions to provide portability and avoid vendor lock-in from other ESB-like solutions. Kevin talked about Sun's efforts to provide application server and identity products, and support for Spring in the OpenESB project. Sun is looking into creating a Spring Service Engine that allows developers to write Plain Old Java Objects (POJOs) with Spring and have them integrated into a JBI container automatically.
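For readers who have not seen SCA's "wire services together in a uniform way" idea in practice, here is a minimal sketch of an SCA 1.0 composite file. All component and class names are invented for illustration; only the namespace and element structure come from the SCA 1.0 assembly model.

```xml
<!-- Hypothetical SCA 1.0 composite: wires a loan-reporting component
     to a core-banking component. Names are invented; only the schema
     namespace and elements are from the SCA 1.0 assembly spec. -->
<composite xmlns="http://www.osoa.org/xmlns/sca/1.0"
           name="LoanReporting">

  <!-- Expose the reporting component's interface outside the composite -->
  <service name="LoanReportService" promote="LoanReportComponent"/>

  <component name="LoanReportComponent">
    <implementation.java class="com.example.LoanReportImpl"/>
    <!-- The wire: resolved by the SCA runtime, not by application code -->
    <reference name="coreBanking" target="CoreBankingComponent"/>
  </component>

  <component name="CoreBankingComponent">
    <implementation.java class="com.example.CoreBankingImpl"/>
  </component>
</composite>
```

Because the wiring lives in this metadata rather than in the Java code, moving a component to another server is a deployment change, which is the portability property both Tibco and Sun were selling.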
When I asked him how they will decide on this Spring integration, Kevin said "it's up to the community." Kevin also said the original developer of the JSR 208 reference implementation is working on a compatibility test kit (TCK) for JBI applications. I floated the idea of Sun making a future version of EJB a JBI container. Kevin pointed me at the Java EE Service Engine; he said it enables you to interact with EJBs from JBI in a "very native way". When I told him about my experience with the Tanzanian bank, Kevin said sympathetically, "historically Java development has not been cheap or easy. We believe we need to make development easier. Integration technologies provide visual tools in NetBeans to orchestrate services and enabled Java to do integration easily, cheaply, and efficiently." This seems to sum up where JBI stands today.

    Virtualization For Service Deployment Infrastructure

    A few developers at JavaOne asked me about using platform virtualization technology to deploy SOA services. Virtualization platforms for Windows from Microsoft and other vendors, and for Solaris from Sun, let you slice a server into multiple running instances of an operating system. For example, on a multi-CPU Intel dual-core platform you run what look like three copies of Windows, with the underlying virtualization platform taking care of time-slicing the CPUs to service the running applications. I found an emerging train of thought here on using virtualization platforms to deploy an SOA infrastructure to run services. Many people I met at JavaOne are being asked by their organizations to evaluate and make decisions on infrastructure and tools to host composite applications and data services. Every data center I have seen already has facilities to run Web-browser-based applications in what Chris Richardson coined the Domain Model. The other popular models (the ESB model and the Web 2.0/Ajax model) deploy services that use XML data for interoperability.
My research shows that using the Domain Model for XML-centric applications is neither scalable nor flexible enough for the average company. That's driving enterprises to consider the alternatives:

    1) Domain Model – servlets, Web containers (Spring and EJB), model/view/controller, and a relational database.
    2) ESB Model – write a service and plug it into the bus; the bus handles communication (messaging) to other services on the bus.
    3) Virtualization Model – run as many copies of the application server as you like and let the virtualization environment worry about load balancing across the underlying CPUs and hardware.
    4) Web 2.0/Ajax Model – services speak multiple protocols to multiple backend systems to deliver a rich user experience and a highly functional application.

    In my experience it is very easy for these platforms to look equal in an evaluation if the use case is written incorrectly. For example, one would see no difference if a test that favors the Domain Model architecture is run against an ESB model. This challenges developers to find new ways to design and build tests to evaluate these models.

    Rohit Valia of Sun told me about Sun's utility computing service. The service is a large-scale datacenter available for you to run applications by the hour; you actually pay for CPU hours using PayPal! The datacenter is stocked with hundreds of Sun servers running Solaris on x64. Sun announced at JavaOne a plug-in for NetBeans to submit applications and data directly. You execute your application as a user and the environment is managed by the Sun Grid Engine. Sun also announced access from 24 countries (formerly US-only) and APIs to embed into your application so you can send a job to the grid. Sun is offering the first 200 CPU hours with 10 GBytes of storage free with each new account. The datacenter is in Nevada and Sun charges $1 per CPU hour.

    DWR 2.0 for AJAX

    I met Joe Walker of the DWR project hanging around the Tibco booth.
He recently shipped DWR 2.0 and looked pretty tired. DWR 2.0 offers three major improvements: dynamic generation of JavaScript from a Java API at runtime, support for asynchronous message transfer from the server to the browser, and security features to reduce the possibility of cross-site scripting (XSS) and cross-site request forgery (CSRF).

    JCP Good, Bad, and Ugly

    I spoke with Onno Kluyt, senior director and CTO/Labs at Sun and director of the Java Community Process (JCP), about the good, the bad, and the ugly of the JCP. I pointed out the good in JSR 175 and how many platforms and tools support annotations. He smiled. I pointed out the bad in JSR 102, where JDOM made it past the JSR Review Ballot and the Executive Committee for SE/EE approved the ballot, but JDOM never made it into Java. Onno said JDOM never achieved a formal specification. Onno told me that when you submit a JSR it is forward-looking, and sometimes the market changes, the developers change, and in the end there is no shame in withdrawing a JSR. Onno said some JSRs don't work out because of the human part of a JSR. He quoted Cameron Purdy at the TSSJS Barcelona JCP panel saying he was surprised to find that the spec lead has to do real work. Onno likened running a JSR to leading an open source project: "not everyone flocks to you just because you are a JSR." Then I talked about the ugly in the form of JBI. I watched JBI fall apart as IBM and BEA, who started as members of the expert group, exited just before the draft specification. From the outside it looked to me like Sun used its position to control the JCP process in its favor, and it made me question the open and impartial nature I was expecting of the JCP. Onno told me that in this case Sun, BEA, and IBM were members of the JBI expert group and Sun did not push them out. This was a case where the three had competitive products and did not want to create a standard. According to Onno, the JCP community facilitated lots of background conversations to keep JBI moving.
He pointed to the executive committee approving JBI as evidence that the JCP is not about Sun pushing through its standard. In a self-reflective moment Onno said "The market and the community is larger than any of us." For instance, Onno told me JSR 76 and 78 on RMI were voted down (with much of the work there now moved to Jini). Onno told me that over the coming year the JCP will begin to allow JSRs to be implemented outside of Java. Sun, HP, IBM, and others have customers with very mixed datacenters. For instance, JSR 76 and 78 deliver RMI client and server in Java, and some customers need a C, or even a Cobol, implementation.

    An Evening With Performance Geeks

    Madhu Konda and Scott Oaks led a BOF on JEE performance optimization. JEE originally meant just the application server; the definition and its responsibilities have greatly changed to include AJAX, XML bindings, Web 2.0 protocols, and business integration. That sounds pretty daunting to me considering how many variations there are to test for performance. Madhu and Scott brought out the entire team: Acra does load generation and Web 2.0 on the server side, Charlie Hunt works with the SE and EE teams, Scott Oaks is tech lead on Glassfish performance and JAX-WS performance, Venkana works on server performance, Kim focuses on XML, Venu is lead for dev2.0 on the server side, and Murty is an OpenESB architect. I asked how the Sun performance team keeps track of all the XML binding compilers and parsers; for instance, the BindMark committer gave up on keeping up to date on the 21 different XML binding compilers and parsers that are out there. The Sun performance team related to the BindMark experience and said they focus on comparing XML bindings using XMLBeans and JAXB. One BOF attendee asked: "We do a lot of XSLT transformations. We discovered that in a single-thread approach there is a specific memory leak. The leak is even discussed on the Apache site. What steps would Sun take to prevent it?"
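As an aside, whatever the specific leak the attendee hit, the standard JAXP pattern for repeated XSLT work is to compile the stylesheet once into a Templates object (which is thread safe) and create a short-lived Transformer per transformation (which is not). A sketch, with class and method names invented for illustration:

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Templates;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch: in JAXP, a compiled Templates object is thread safe and reusable,
// while Transformer instances are not. Compile the stylesheet once, then
// create a cheap Transformer per call instead of caching one long-lived
// Transformer that accumulates state.
public class XsltRunner {

    // Identity transform used here purely as a self-contained example.
    public static final String IDENTITY_XSL =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:template match='@*|node()'>"
      + "<xsl:copy><xsl:apply-templates select='@*|node()'/></xsl:copy>"
      + "</xsl:template></xsl:stylesheet>";

    private final Templates templates;

    public XsltRunner(String xsl) throws Exception {
        // Compile once; Templates may be shared across threads.
        templates = TransformerFactory.newInstance()
                .newTemplates(new StreamSource(new StringReader(xsl)));
    }

    public String transform(String xml) throws Exception {
        Transformer t = templates.newTransformer(); // per-call, not cached
        t.setOutputProperty("omit-xml-declaration", "yes");
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)),
                    new StreamResult(out));
        return out.toString();
    }
}
```

The compile-once/transform-per-call split also avoids paying the stylesheet compilation cost on every transformation.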
The team generally agreed that once they suspect a memory leak they use OptimizeIt to take snapshots over time and compare the snapshots to determine the existence of the leak. They also pointed out the utility of JDK 6's jhat heap analysis tool. They recommended reading Jean-Francois Arcand's blog. When asked for performance data comparing Glassfish to other application servers, the team noted that they are in the middle of Glassfish 2 development and the SPEC organization will not allow them to publish data until the release reaches FCS. On the other hand, they noted that individual team members publish informal blogs that often include performance data. The team said Glassfish 2 is a performance-oriented release. Madhu noted that the Glassfish 1 ranking on SPECjAppServer shows Glassfish 15-25% slower than the leader, Oracle, when using the MySQL DB. They were happy to show that their open source application server came that close to Oracle. Others on the team talked about how much time XML bindings take to transform documents into objects. For example, they said moving from JAXB 1.1 to 2.1 drastically changed the initialization technique, and that slows performance. The performance team offered suggestions on XML optimization. For instance, they noted that some applications use factories that initialize when the JVM loads; instantiating these factories is very expensive, and the team recommended a pooling strategy. They also noted that a lot of parsers are not multi-threaded and are not safe to cache: a common mistake they see is caching a parser instance that is not thread safe. You want to use factories to create your own parser instance. The Sun performance team created FABAN, their own performance test tool, out of a need to do testing, modeling, and simulations. The team makes FABAN freely available and does the maintenance on it. They chose not to use SlamD because they wanted functions that would let them run multiple tests over time and compare the results.
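The team's advice about parser caching is easy to get wrong, so here is a minimal sketch of what "use factories to create your own parser instance" can look like with JAXP. The class and method names are my own; the pattern gives each thread its own DocumentBuilder via a ThreadLocal, so the expensive factory setup is amortized without sharing a non-thread-safe parser.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: JAXP's DocumentBuilder is not thread safe, so instead of caching
// one shared instance (the mistake the Sun team described), give each
// thread its own builder. Class and method names are invented.
public class PerThreadParser {

    private static final ThreadLocal<DocumentBuilder> BUILDER =
        new ThreadLocal<DocumentBuilder>() {
            @Override
            protected DocumentBuilder initialValue() {
                try {
                    // Factory lookup and newDocumentBuilder() are costly;
                    // do them once per thread, not once per parse.
                    return DocumentBuilderFactory.newInstance()
                            .newDocumentBuilder();
                } catch (Exception e) {
                    throw new IllegalStateException(e);
                }
            }
        };

    public static Document parse(String xml) throws Exception {
        DocumentBuilder db = BUILDER.get();
        db.reset(); // clear any state left over from the previous parse
        return db.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
    }
}
```

A pool of parser instances, as the team suggested for heavyweight factories, achieves the same isolation when thread counts are unbounded.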
They found HP LoadRunner to be very linear in its test development technology: if you simulate 1,000 users, they all tend to behave the same. The Sun team needed mixes and timing models that provide randomness. FABAN uses an EJB-like programming model to code tests, including a configuration file to make tests easy to configure. FABAN includes no timing code or report generation code. When asked about performance testing for AJAX applications, the group was a little less certain. FABAN tests at the network protocol level without actually instantiating a browser. The Selenium project uses a live browser, which makes it difficult to stage a scalability and performance test when each test machine needs to run multiple browsers. I got together with the Selenium leads a couple of weeks ago and figured out how to run Selenium in the PushToTest Version 5 environment. I told the Sun team how I accomplished this so they could consider it for FABAN. Details are at

    Dynamic Scripting

    I hosted the Dynamic Scripting BOF at JavaOne. The event went off very well with what looked like hundreds of attendees (it could have been the large room!). It was an occasion for all of the scripting language proponents to meet and discuss the state of affairs. It was also a good chance to meet John Rose, the new spec lead for JSR 292 and a very nice guy. Download the slide presentation from my blog site.

    Make The UnConference An All-JavaOne-Long Event

    My biggest criticism of JavaOne over the years is how often the conference planners seem to mismatch what I am interested in with the actual program. There are other problems, like scheduling snafus – for instance, planning all of the Spring sessions to be on Tuesday at 1:00 pm in different halls. I wish JavaOne would adopt the UnConference technique and set aside one track on each day just for attendee-nominated presentations and discussions.
The RedMonk-sponsored UnConference during the CommunityOne pre-conference last Monday had the content and attendees I value most and made the JavaOne experience richer for me. -Frank

    Threaded Messages (9)

  2. Jython 2.2 beta 2 ships[ Go to top ]

    I suppose the Jython folks left last night's JavaOne Dynamic Scripting BOF and released beta 2. Amazing. Here's the announcement that I got tonight: -- I'm happy to announce that Jython 2.2-beta2 is available for download: See for installation instructions. This is the second and final beta release towards the 2.2 version of Jython. It includes fixes for more than 30 bugs found since the first beta and the completion of Jython's support for new-style classes. Enjoy! Charlie
  3. FABAN?[ Go to top ]

    What is the download link to FABAN?
  4. Link[ Go to top ]
  5. Download link[ Go to top ]
  6. Imola Informatica Link[ Go to top ]

    You can read about Imola Informatica here.
  7. Jython slides[ Go to top ]

    Oti Humbel is a Jython expert. He led a session today that presented a portion of his amazing knowledge and humor on the Jython language. His slides contain a lot of information and are found at -Frank Cohen
  8. Oh No![ Go to top ]

    Onno said some JSRs don't work out because of the human part of a JSR. He quoted Cameron Purdy at the TSSJS Barcelona JCP panel saying he was surprised to find that the spec lead has to do real work.
    Boy oh boy, I sure hope that the "tongue in cheek" nature of that quote is self-evident, or I'm going to look like an even bigger schmuck than I actually am ..

    Peace,
    Cameron Purdy
    Tangosol Coherence: The Java Data Grid
  9. JCP and defensive leadership[ Go to top ]

    Good point, Cameron. My question to Onno put him on the defensive and I felt uncomfortable about that. (I'm no 60 Minutes reporter.) Onno did his best to answer my question fully without sounding defensive. Indeed, Onno was trying to put a humorous spin on his answer by quoting you. The thing I took away from the conversation with Onno is an appreciation for what it must be like to lead a standards group like the JCP and walk the razor's edge between being a shill for Sun and a schmuck for a non-Sun participant. Onno is diplomatic in that way, and... you're not that big of a schmuck. :-) -Frank
  10. Some Faban clarifications[ Go to top ]

    Just a gentle correction: it wouldn't be accurate to say Faban includes no timing code or report generation code. More accurately, benchmarks or stress tests written using the Faban driver framework won't need to address timing and report generation in their own code. Stat collection and reporting are implicit/automated, and there is quite an elaborate set of stats being collected and reported. APIs are available for benchmark developers to collect and provide additional stats if the default set does not meet their requirements. As for Ajax and the Selenium integration, this is something to look into. The biggest concern is the footprint and CPU consumption: simulations using n Firefox instances use a lot more resources than n client threads simulating the traffic. For example, the cost to put together a rig with 10,000 Firefox instances would be quite prohibitive. We need to investigate this option a little more but are certainly open to it. If we can capture the Ajax traffic using Firefox's Live HTTP Headers and there is a limited or predetermined set of Ajax requests for the system under test (and usually these are limited), it is certainly more effective to create that traffic directly against the server, even if the Selenium integration is available. Logically adapting the Ajax request to the particular session/user is already supported and has been done before.