RMI/IIOP, nice idea but the reality is turning out to be different

By Billy Newport

01 Jan 2000 | TheServerSide.com

Sun has been pushing RMI/IIOP as the standard protocol for J2EE servers. When they started doing this, it looked like a good idea. Corba was the then-established distributed object infrastructure and IIOP looked like a good way to get interoperability.

However, the reality today is that this isn't the success it seemed to be when we started down this path. The intent of this article is to point out the weaknesses of the RMI/IIOP idea in its role as the agent for interoperability for J2EE.

Security.

There are no security standards for interoperability between ORBs. This basically guarantees that a client ORB will not be able to authenticate against a server ORB unless they are from the same vendor.

IIOP/SSL, a way out for security?

One option that seems to offer hope is IIOP/SSL. Here the ORBs could delegate authentication to SSL. SSL can use certificate-based authentication, which is standardized. If both ORBs were written to use IIOP/SSL then the server could get the client's credentials from SSL.

This appears to work when we look at the simplest scenario, a client and a server, but it quickly starts to break down once things get a little more complex. Suppose WebLogic supported IIOP/SSL. A WebLogic client establishes an IIOP/SSL connection with its WebLogic server. It uses certificates to authenticate against the server. Looks fine. Now, suppose a WebSphere server (that also supports IIOP/SSL) hosts a service that the WebLogic server needs. Let's also suppose that we want this call to be made under the client's identity. This is not possible using SSL. Why? Because we would need the client's private key to create the connection between WebLogic and WebSphere. The private key exists only in the client; it is never transmitted to the server. It's simply not possible for the WebLogic server to create an SSL connection to WebSphere using the client's identity.
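
To make this concrete, here is a minimal JSSE sketch of SSL client authentication. The keystore file name and password are placeholders; the point is that the handshake requires the client's private key, which never leaves the client's keystore, so a middle-tier server cannot reuse it.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;

    public class SslClient {
        public static SSLSocket connect(String host, int port) throws Exception {
            // client.jks and its password are placeholders for the client's keystore
            KeyStore keys = KeyStore.getInstance("JKS");
            keys.load(new FileInputStream("client.jks"), "changeit".toCharArray());

            KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
            kmf.init(keys, "changeit".toCharArray()); // unlocks the private key

            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(kmf.getKeyManagers(), null, null);

            // The handshake proves possession of the private key; only the
            // certificate (the public part) ever crosses the wire.
            return (SSLSocket) ctx.getSocketFactory().createSocket(host, port);
        }
    }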

This means that while authentication is possible with SSL, delegation/impersonation across SSL connections is not. We could imagine a scheme where the WebLogic server uses a trusted identity to establish the connection with WebSphere and then transmits the client's user id to WebSphere, which then acts under the client's id. But we're not using SSL any more at that point, so this would require co-operation between the vendors, and that is unlikely to happen.

Certificates, a possible way out of the security mess.

Certificates look like a reasonable choice for finding a way out of this mess. A client could initiate its connection to the server by transmitting a certificate digitally signed by a certificate authority (CA) that our application server trusts. The application server verifies the signature on this token and then authenticates the client locally. If the server needs to make a call to another vendor's server, we transmit the signed token to the other server. The other server (which also trusts our CA) can then verify the signature on the token and again authenticate the request as one from the client.

The problems here are that Sun would need to standardize the token and extend RMI/IIOP to accommodate it. The vendors would then need to implement it, of course. A further problem with this approach is certificate revocation lists (CRLs), which are not standardized currently. CRLs address a weakness of certificates. When a CA issues a certificate, it carries a valid-until date, so a server can check very quickly against this expiration date whether the certificate is still valid according to the signer. The problem arises when, say, we fire an employee. The fact that he was fired is not recorded in the certificate, so even after he is gone the servers still accept his certificate as valid. Remember, a certificate is a constant thing; it never changes. A CRL service allows someone to check whether a certificate is STILL valid. When someone is fired, we can place the certificate's distinguished name (DN) in the CRL so that future authentication requests will be rejected by any application server. The main problem with all this is the lack of accepted standards that would allow application server vendors to implement it.
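
As a rough illustration, the standard java.security.cert API can already express both checks described above. This is a minimal sketch assuming the certificate and a CRL are available as local files (cert.cer and revoked.crl are placeholder names):

    import java.io.FileInputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509CRL;
    import java.security.cert.X509Certificate;

    public class RevocationCheck {
        public static void main(String[] args) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            X509Certificate cert = (X509Certificate)
                cf.generateCertificate(new FileInputStream("cert.cer"));
            X509CRL crl = (X509CRL)
                cf.generateCRL(new FileInputStream("revoked.crl"));

            // Check 1: the fast, local test against the valid-until date.
            cert.checkValidity(); // throws if expired or not yet valid

            // Check 2: the CRL test, which catches the "fired employee"
            // case that the expiration date cannot.
            if (crl.isRevoked(cert)) {
                throw new SecurityException(
                    "certificate revoked: " + cert.getSubjectDN());
            }
        }
    }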

Transactions.

The OMG defined a standard called OTS. This standard allows a transaction to be distributed across two processes; it helps implement two-phase commit. OTS is a real specification. But I think it's fair to say that OTS interoperability between different vendors has not been achieved. The only ORBs that I'm aware of that can do this are the WebSphere Advanced and WebSphere Enterprise ORBs. These are two different ORBs (one Java and the other native) but they do support interoperable OTS. ORB vendors don't seem to test their OTS implementations against other vendors' ORBs. Maybe this is something the OMG should start encouraging its members to do. The early days of Ethernet saw interoperability shows where lots of vendors came and demonstrated interoperability between different chip sets. It would be a good thing for something similar to happen at the OMG.

It's important to be clear here. I'm not saying that OTS is not interoperable; it absolutely can be. I'm simply saying that you shouldn't take it as a given: you'll need to do a lot of testing. It's not something that I believe is extensively tested by OTS vendors. I haven't seen statements from ORB vendors certifying that their OTS is interoperable with ORB version X from vendor Y.

Security and transactions between J2EE and legacy Corba systems, are they really important?

Probably not as much as you'd think in reality. I think it's fair to say that most Corba applications were not developed using a 'strong' security product. Most applications used either no security or a simple facade object as the public or persistent (in the Corba sense) object registered in the naming service. This facade let you create a session by supplying a user name and password to a factory method. If the credentials are good then you get back a 'real' services object from which you can do anything you're allowed to do. This works as far as it goes but wasn't really that secure. Given that most legacy systems are probably built like this, the lack of security in IIOP is probably not such a big deal in most applications from an implementation perspective (the design perspective may be very different, of course!). Any vanilla IIOP ORB should be able to talk with such a system with no issues other than bootstrapping.
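
In Java terms, the facade pattern described above boils down to something like the following sketch; the interface and exception names here are hypothetical, not taken from any real product:

    // The single 'public' object registered in the naming service.
    interface LoginFacade {
        // Factory method: checks the credentials and, if they are good,
        // hands back the 'real' services object for this session.
        Services login(String userName, String password)
            throws AuthenticationFailed;
    }

    // The session-scoped object the client actually works with.
    interface Services {
        void doSomethingUseful();
    }

    class AuthenticationFailed extends Exception {}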

Likewise, Corba vendors have probably sold a lot more non-OTS ORBs than OTS ORBs. This means that most Corba applications don't use transactions at the IIOP level in any case. So the fact that OTS may not interoperate is also not such a big deal.

Other points to consider with legacy Corba servers are things like callbacks. It was common practice with Corba services for clients to register local Corba objects with a remote server to implement callbacks. This is not so easy in the J2EE world.

Security and transactions between J2EE servers from different vendors?

Here it is more important. All J2EE servers implement security, and they all implement it in incompatible ways. If security is on then you will not be able to connect from a different vendor's ORB or application server. OTS was an optional feature before, but now all the big J2EE players support it. EJB developers expect transactions to work; they write their software on that assumption. This is different from the legacy systems, where this expectation did not exist. This means that OTS not being interoperable between J2EE servers can cause serious problems precisely because of programmers' expectations. For example, a WebLogic server is executing a method on a session bean using a container-managed transaction. The method makes a call to a Corba server. The Corba server updates a database and returns to the WebLogic method. The WebLogic method then decides to roll back the transaction. But because OTS was not working, the Corba server is not rolled back; it wasn't involved in the WebLogic transaction at all. It used an independent transaction to update the database. The developer may not know this happens, in which case you've got a bug, or at best he must somehow undo the action performed by the Corba server to make the system consistent again.
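
A hedged sketch of that scenario, with hypothetical Account and LegacyService interfaces standing in for the EJB reference and the Corba stub:

    import javax.ejb.SessionContext;

    // Placeholder for an EJB reference enlisted in the container's transaction.
    interface Account { void debit(long amount); }

    // Placeholder for the Corba stub to the non-OTS legacy server.
    interface LegacyService { void updateDatabase(long amount); }

    public class TransferBean {
        private SessionContext ctx;   // supplied by the container
        private Account account;
        private LegacyService legacy;

        // Runs under a container-managed transaction (e.g. TX_REQUIRED).
        public void transfer(long amount) {
            account.debit(amount);    // enlisted in the container's transaction

            // Without interoperable OTS this runs in an independent
            // transaction on the Corba side and commits there immediately.
            legacy.updateDatabase(amount);

            // Marking the transaction for rollback undoes the EJB work,
            // but the Corba server's update has already committed.
            ctx.setRollbackOnly();
        }
    }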

Security and transactions between J2EE servers and Corba clients?

Here security will probably kill you. Once you switch it on, then unless you're lucky enough to have an ORB that is security-compatible with the application server's ORB, you're out of luck. Even when workarounds are available (WebLogic can assign a fixed identity to 'anonymous' Corba clients), they may not be suitable for your application.

As for transactions, you'll probably need to use container-managed transactions for all beans/methods used by the Corba client, as the Corba client will not be able to start a transaction itself. This basically makes it impossible to call beans marked as TX_MANDATORY. Only TX_NOT_SUPPORTED, TX_SUPPORTS, TX_REQUIRED and TX_REQUIRES_NEW would be possible.

But security will probably be your biggest problem. It's also worth remembering that you will lose clustering support (load balancing and fault tolerance). This is normally only available through the J2EE vendor's supplied Java client stubs.

Clustering.

Clustering is implemented today in every application server that I know of using smart client stubs. The stubs are responsible for automatically reconnecting home interface references and stateless session beans when a fail-over occurs. The stubs are also responsible for load balancing: they spread requests over the available pool of servers in the cluster.
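
A minimal sketch of what such a smart stub does under the hood; real stubs are generated by vendor tooling and are far more involved, and every name here is hypothetical:

    import java.rmi.RemoteException;

    // Hypothetical illustration of a vendor's smart stub: round-robin load
    // balancing plus transparent fail-over across the cluster's replicas.
    class SmartStub {
        private final String[] servers; // replica addresses known to the stub
        private int next = 0;           // round-robin cursor

        SmartStub(String[] servers) { this.servers = servers; }

        Object invoke(Invocation call) throws RemoteException {
            // Try each replica in turn: the rotation gives load balancing,
            // the retry loop gives fail-over when a server is down.
            for (int attempts = 0; attempts < servers.length; attempts++) {
                String target = servers[next];
                next = (next + 1) % servers.length;
                try {
                    return call.sendTo(target);
                } catch (RemoteException serverDown) {
                    // fall through and retry against the next replica
                }
            }
            throw new RemoteException("all replicas unavailable");
        }

        interface Invocation {
            Object sendTo(String server) throws RemoteException;
        }
    }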

Obviously, for this to work, you need to use the stub generator that comes with your application server. This tool only generates Java stubs and only works with the ORB that your application server uses. This basically means that unless you

  • use your vendor's ORB,
  • use your vendor's stub generator, and
  • write your client in Java,

then you can kiss clustering, AKA load balancing and fault tolerance, goodbye. This also applies to RMI/JRMP. It's the stubs that implement the clustering features, and if a vendor implements them using RMI/JRMP then you need their 'version' of RMI as well or you lose the clustering support.

J2EE functionality migrating to the ORB.

Vendors seem to be pushing functionality into the ORB. This functionality obviously includes transaction, security and clustering support. This basically guarantees that a vendor will need to develop a custom ORB to support its application server. As things stand, RMI/IIOP uses a JNI native code library, and each vendor may require a 'special' JNI library. This basically guarantees breaking Sun's write once, run anywhere philosophy. You'll only be able to run your client on a JDK the vendor supports its JNI code on, not on any certified JVM. This is currently a problem with WebSphere, and you can expect the same from other vendors' servers in the near future when they start needing an enhanced ORB due to these requirements.

If you need to communicate from one vendor's server to another vendor's server, then mixing both ORBs in the server (a single VM) may be a problem unless vendors and Sun start to address this now and work out the details. Even if you can mix them, you definitely will not be able to use security, and whether you get transactional support with OTS is also in doubt.

Interoperability between J2EE and legacy or non-Java systems.

Given the above problems, and given that most J2EE servers are built on custom ORBs that are not commercially available separately (Inprise and Iona are the exceptions here), any sort of real interoperability vanishes as soon as you use security or transactions. The legacy systems may not even be using the more recent Corba V2.3 ORBs, which means you'll be severely restricted in the bean remote interfaces in terms of parameter types etc. (no Java objects or arrays, for example).

If you have a lot of legacy Corba systems to plug in, then I'd go with an application server from the Corba vendor you used before. You'll likely encounter fewer problems, or at least only have a single vendor to chase if problems arise.

I'd argue that messaging is actually a 'better' (if there is such a thing) way to integrate legacy systems. Products like IBM MQSeries or BEA MessageQ run on most platforms and have bindings for most languages. You'll still need to come up with a security mechanism, but the transactional side of the problem should be solved. WebSphere has supported 2PC for some time now and WebLogic 6.0 is now supposed to support it also. Mature messaging products have long supported 2PC using the XA standard, and the messaging vendors are now adding JTS support to their Java bindings. This means we can have messaging and database updates in a single transaction.
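
To show what that buys you, here is a hedged sketch of a message send and a database update committed as one unit, assuming the server exposes XA-capable JMS and JDBC resources in JNDI (the JNDI names below are placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.jms.Queue;
    import javax.jms.QueueConnection;
    import javax.jms.QueueConnectionFactory;
    import javax.jms.QueueSession;
    import javax.jms.Session;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class OrderPublisher {
        public void placeOrder(String order) throws Exception {
            InitialContext jndi = new InitialContext();
            UserTransaction tx =
                (UserTransaction) jndi.lookup("java:comp/UserTransaction");
            QueueConnectionFactory qcf =
                (QueueConnectionFactory) jndi.lookup("jms/XAConnectionFactory");
            Queue queue = (Queue) jndi.lookup("jms/OrderQueue");
            DataSource ds = (DataSource) jndi.lookup("jdbc/XAOrderDB");

            tx.begin();
            QueueConnection qc = qcf.createQueueConnection();
            try {
                QueueSession session =
                    qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createSender(queue)
                       .send(session.createTextMessage(order));

                Connection db = ds.getConnection();
                PreparedStatement ps =
                    db.prepareStatement("INSERT INTO ORDERS VALUES (?)");
                ps.setString(1, order);
                ps.executeUpdate();
                db.close();

                tx.commit();   // the message and the row commit together (2PC)
            } catch (Exception e) {
                tx.rollback(); // neither the message nor the row survives
                throw e;
            } finally {
                qc.close();
            }
        }
    }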

Interoperability between J2EE vendors.

So, if we use RMI/IIOP, we lose clustering and security. We may lose transactional support depending on whether the OTS implementations are interoperable. The question has to be asked: when I lose these two or three things, especially security, can it really be called interoperable any more? Sun is pushing for interoperability in EJB 2.0. But it's difficult to see how they can achieve this in any 'real' sense unless these issues are addressed somehow.

As it stands, if RMI/IIOP is the future of J2EE interoperability, then the future looks bleak right now. Until Sun (which owns the J2EE standards process) addresses just how exactly RMI/IIOP is going to be standardized to the point where interoperability can be assured, any promises of interoperability will be based on very unrealistic scenarios (for example, no security!).

Again, I'd argue that JMS or even SOAP offers a more realistic approach for interoperability between J2EE servers from different vendors.

JSRs that should help.

There is currently a JSR in progress that should address the security interoperability concerns. If it gets approved then we just have to wait for J2EE vendors to support it, but this may take a while.

Conclusion.

I'd argue that RMI is actually a better protocol than IIOP for Java clients. If Sun wanted interoperability with existing Corba servers, then making sure that the Java version of the legacy Corba vendor's ORB works within a J2EE server would have been sufficient.

You may argue that RMI/IIOP allows EJBs to be consumed by the forthcoming Corba component model (CCM) when products using it arrive, but I'd argue that this doesn't justify it. J2EE is basically about a scalable runtime infrastructure for hosting Java components. These components are hosted in a container. How the container communicates with remote clients is not a concern when writing EJB server components. Sun could have stuck with RMI and then, later, when the CCM arrives, simply added an RMI/IIOP protocol adapter to the container. It might be an idea for Sun to enforce this distinction between container and protocol using a standard interface so that a market for protocol components could develop. Third parties could develop SOAP adapters, IIOP adapters, file-based adapters, socket-based adapters, etc. This would also force Sun to come up with a means for a security context to be passed from the protocol adapter to the container. That would probably also allow a standard pluggable security mechanism to be built into J2EE, a worthy addition which may be addressed by the previously mentioned JSR.
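
Nothing like this exists in the J2EE specifications; the following is purely a sketch of what such a standard container/protocol interface might look like, with every name hypothetical:

    // Hypothetical carrier for the identity the protocol adapter
    // authenticated, handed across the standard boundary.
    interface SecurityContext {
        String getPrincipalName();
        boolean isAuthenticated();
    }

    // Hypothetical interface the container exposes to protocol adapters.
    // An IIOP, SOAP, file-based or socket-based adapter would parse its
    // own wire format and then dispatch through this one method, letting
    // the container apply its usual transaction and ACL machinery.
    interface ContainerEndpoint {
        Object dispatch(String beanName, String methodName, Object[] args,
                        SecurityContext caller) throws Exception;
    }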

This idea of a separate protocol component is actually present in the J2EE specifications already: message-driven beans basically implement this pattern. The component developer writes a message bean using JMS as the interface to the message. Any JMS implementation (i.e. any protocol) can be plugged into an EJB 2.0 J2EE server, and you could imagine writing a JMS adapter for a variety of messaging mechanisms. I think such an arrangement for the session and entity bean containers would be a useful facet to add to the EJB specification.
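
For illustration, a minimal EJB 2.0 message-driven bean; the class name and message handling are invented for the example. Note that the bean sees only the JMS Message, never the wire protocol that delivered it.

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    public class OrderMDB implements MessageDrivenBean, MessageListener {
        private MessageDrivenContext ctx;

        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            this.ctx = ctx;
        }
        public void ejbCreate() {}
        public void ejbRemove() {}

        public void onMessage(Message msg) {
            try {
                if (msg instanceof TextMessage) {
                    // Business logic goes here; the transport is invisible.
                    String body = ((TextMessage) msg).getText();
                    System.out.println("received: " + body);
                }
            } catch (Exception e) {
                ctx.setRollbackOnly(); // let the container redeliver
            }
        }
    }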

So, in summary: RMI/IIOP hasn't really helped with legacy integration. If the legacy system is using security or OTS then you're probably doomed. Besides, you could have just used an off-the-shelf Java ORB in any case to talk to the majority of systems, without needing RMI/IIOP support in your J2EE server.

Corba clients needing to connect to your J2EE server are not really helped at all either, as once security is enabled the remote ORB cannot connect because it cannot authenticate. WebLogic can attach a guest or specific identity to incoming IIOP connections. This may work for your application, but it basically means every client that connects looks the same from an audit and authorization perspective, and this may not be acceptable for all applications. You didn't add security and ACLs to your EJB components just so that every client looks the same!

As for interoperability between J2EE servers, again it's just hype. The lack of security and interoperable OTS implementations basically means that you'll run into issues here also. JMS or SOAP are probably your best bets for achieving interoperability between different J2EE products in the near and medium term. So, at least from my perspective, RMI/IIOP started out as a good idea, but unfortunately the lack of standards and the other issues highlighted here show that the reality is somewhat different.