Discussions

News: RMI/IIOP nice idea but the reality is turning out differently

  1. Sun has been pushing RMI/IIOP as the standard protocol for J2EE servers. When they started doing this, it looked like a good idea. CORBA was the established distributed object infrastructure at the time, and IIOP looked like a good way to get interoperability. However, today's reality is that this hasn't been the success it seemed when we started down this path. The intent of this article is to point out the weaknesses of the RMI/IIOP idea in its role as the agent for interoperability for J2EE.

    Read Article Here

    Threaded Messages (30)

  2. This is for Billy.

    "I'd argue that RMI is actually a better protocol than IIOP for Java clients."
    Well, where are your arguments?

    On a side note, I notice that the articles don't support feedback.
    Nice feature, you avoid criticism :)
  3. I 'argued' this for the following reasons.

    RMI can run on almost any JDK and almost any JRE. RMI/IIOP can't: it's an optional component in 1.2 and only became mandatory in 1.3.

    I've seen projects using RMI/IIOP get caught out at the end, when they discovered that they couldn't run the client on some of the client JREs they needed to support, even though a JVM was available. They resorted to RMI and/or HTTP for their main protocol in the end.

    Why did we move from RMI to RMI/IIOP in the J2EE world? It could be said that this was done for legacy interoperability reasons, and in this regard we've had limited success. The problems here are security and limitations on the remote interfaces, depending on whether or not you have a 2.3-level ORB on your client. It may get better in the future, but right now it's difficult.

    Even with RMI/IIOP on the J2EE server, if security is on then remote Corba ORBs cannot talk to us unless the client is using the same ORB as the server (Iona and Inprise users may be able to get this to work as the server ORB is also available as a standalone product).

    I'm not knocking CORBA, I want to make that clear; this wasn't/isn't about CORBA versus J2EE. I'm only talking about the use of RMI/IIOP as a means for interoperability in the J2EE world. From looking at your previous posts (and excuse me if you feel I'm putting words into your mouth), I understand, and maybe to a lesser extent than yourself, support your view that J2EE is not appropriate for some applications and that CORBA may be a better fit for some of them. The only intention is to make people aware of the issues regarding RMI/IIOP in the context of interoperability between J2EE servers from different vendors, and between legacy CORBA applications and J2EE servers.

    Even the ongoing JSRs on security interoperability for J2EE servers won't help with legacy CORBA clients/servers: unless the CORBA vendors also adopt the findings of the JSR, it may help between J2EE servers from different vendors but not with the legacy situation.

    So, that's my basis for the statement. I hope it clarifies it.

    Besides the one statement you outlined though, what about the rest of it? Does it make sense or do you also think I'm mistaken there?

    BTW, I agree that not having comments on these pieces is a problem, I also miss the feedback, good or bad. You can
    always email me if you want to (and people have) :-)
  4. Billy,

    The rest of the article is excellent.

    I only highlighted that issue because you didn't put the arguments in there.

    There are, however, two things that I'm a little unsure about.

    1. IIOP (GIOP, as a matter of fact) is a better protocol than the existing JRMP.
    I think you don't argue with this; IIOP does everything JRMP does, plus a little extra. As for the lack of support in clients, I don't think it's an issue that can't be overcome.
    As a matter of fact you have two options:
       a. if you distribute the stubs automatically, you can put an rmi/iiop implementation in the same codebase.
       b. if you distribute manually by simply copying it, you can copy the rmi-iiop package as well.
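
    A minimal sketch of option (a), assuming the server's stubs are served from an HTTP codebase (the host name, policy file, and main class below are made up for illustration; java.rmi.server.codebase itself is the standard RMI property):

    ```shell
    # Illustrative: export the generated stubs (and any supporting classes)
    # from an HTTP codebase so clients download them on demand.
    # stubs.example.com, server.policy and com.example.OrderServer are
    # placeholders, not real artifacts from this discussion.
    java -Djava.rmi.server.codebase=http://stubs.example.com/classes/ \
         -Djava.security.policy=server.policy \
         com.example.OrderServer
    ```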

    2. I agree that the current RMI-IIOP implementation doesn't help you in any way to reach the interoperability goals you described.
    However, I'm not sure that those goals are the right ones and worth following. I'll come back later with details.
  5. It's a little more complex with RMI/IIOP, as it usually depends on the ioser JNI library. I've seen the problem where a client was on FreeBSD and the 1.2 JRE for it didn't have this library, so copying the Java stubs doesn't help; you need native code to use the RMI/IIOP packages.
  6. Let's go through some issues in detail.

    1) Stub portability.
    If the stub generated by a Java ORB can't run on a Java platform, then I'd say that ORB is just lousy.
    Even if it's Sun's RMI-IIOP.
    I don't think IIOP is to blame here, rather it was an implementation decision to use native code.


    2) Transaction propagation across App Servers (across different ORBs).
    I think this is an unreasonably ambitious goal.
    As far as I know, 2PC doesn't work across transaction managers.
    Although the theory supports this scenario, I'm not aware of transaction managers that implement such a feature, and even if there are some, the performance penalty would be heavy.

    As a matter of fact IMHO, the ubiquity of distributed transactions that was made possible with Microsoft's MTS and
    Sun's EJB made it easy for developers to forget one basic thing:
    - Distributed Transactions are a heavy burden and they should be generally avoided.

    3) Security.
    I think that authentication and authorization have larger issues than can be solved by the current infrastructure.
    With current technologies, I'm afraid we have to address these issues at a higher level, rather than rely on the infrastructure.
    Even if, let's say, the RPC mechanism propagated some kind of security context, with certificates and everything that's needed, this would still be a long way from solving the security problems.

    In general I'd say that CORBA is better equipped than RMI to handle all these issues, but no infrastructure mechanism can do miracles.

    So I agree with your conclusion that the reality is not that good, but the introduction of IIOP is a good thing, and on the other hand there are things that it would be too much to ask for the infrastructure to solve them.

    And your proposed solution ?!
    It wasn't quite clear. Going back to RMI/JRMP won't solve anything that you mentioned.
    SOAP and JMS, well, maybe, but I'd like to see some arguments, because my thinking goes "maybe not".

    Cheers,
    Costin
  7. Hi Costin,

    I posted the article in the news forum so that readers can have a discussion about the article.

    Ed Saikali
  8. The use of certificates appears to be vulnerable to a man in the middle attack between the application servers. This is because there is no "shared secret" between the application servers. I am assuming this is done over an encrypted channel and the Weblogic server and the client have a shared secret otherwise there is another man in the middle attack between the client and Weblogic.

    There also may be attacks that can be used by the Weblogic server (or an impersonator) against the WebSphere server as it does not appear that WebSphere would have any method for determining that the client is who Weblogic claims and that he has made the request Weblogic claims. There would need to be a protocol for encapsulating the request as well as the time of the request in the token which the client encrypts. Even with these steps I suspect there are attacks I am missing.

    These types of attacks are why SSL does not allow delegation/impersonation.
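
    One hedged sketch of the kind of protocol described above: binding the request body and a timestamp into a client-side signature, so a downstream server can verify what the client actually authorized and when. All class and method names here are illustrative, not a real product API:

    ```java
    import java.nio.charset.StandardCharsets;
    import java.security.*;

    // Illustrative only: the client signs (request + timestamp) with its
    // private key; any server holding the client's public key can then
    // check that this exact request was authorized by the client.
    public class SignedToken {
        public static byte[] sign(PrivateKey key, String request, long timestamp)
                throws GeneralSecurityException {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initSign(key);
            // Binding the timestamp into the signed payload limits replay attacks.
            sig.update((request + "|" + timestamp).getBytes(StandardCharsets.UTF_8));
            return sig.sign();
        }

        public static boolean verify(PublicKey key, String request, long timestamp,
                                     byte[] signature) throws GeneralSecurityException {
            Signature sig = Signature.getInstance("SHA256withRSA");
            sig.initVerify(key);
            sig.update((request + "|" + timestamp).getBytes(StandardCharsets.UTF_8));
            return sig.verify(signature);
        }

        public static void main(String[] args) throws Exception {
            KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
            long now = System.currentTimeMillis();
            byte[] token = sign(pair.getPrivate(), "debit account 42 by 10.00", now);
            // The original request verifies; a tampered one does not.
            System.out.println(verify(pair.getPublic(), "debit account 42 by 10.00", now, token));
            System.out.println(verify(pair.getPublic(), "debit account 42 by 99.00", now, token));
        }
    }
    ```

    As the posts above note, even this only covers integrity and replay of a single request; it doesn't by itself solve delegation.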
  9. IIOP/SSL isn't really an option anyway, due to lack of support in WebSphere; I'm not sure whether WebLogic supports it. I remember it did, at least with T3.

    HTTP/SSL with SOAP or something similar is probably the only way of doing SSL. The best you can do is authenticate both ends of the SSL pipe when we make it: WebLogic verifies the WAS certificate and WAS does likewise with WebLogic. We add each end's certificate to the other end's trusted certificate ring, or issue both certificates using a trusted CA. We handle certificate revocation lists using a common LDAP directory that is used to verify that the other party's DN exists. WAS operates like this; again, not sure about WebLogic.

    If someone can get both certificates with the private keys then we're still vulnerable to a man in the middle attack but if the certificates or passwords are compromised, you're pretty much hosed in any case...
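
    For what it's worth, the mutual-authentication arrangement described above is commonly wired in with the standard JSSE system properties; the keystore paths, passwords, and main class below are placeholders, and the actual mechanism in WAS or WebLogic is product-specific:

    ```shell
    # Illustrative JSSE settings for mutual authentication: each server
    # presents its own keystore and trusts only the peers in its truststore.
    # All file names, passwords and the class name are made up.
    java -Djavax.net.ssl.keyStore=was-server.jks \
         -Djavax.net.ssl.keyStorePassword=changeit \
         -Djavax.net.ssl.trustStore=trusted-peers.jks \
         -Djavax.net.ssl.trustStorePassword=changeit \
         com.example.BridgeService
    ```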

    Another option may be JMS and the use of a digital signature on the message to authenticate it. The problem here is that J2EE currently doesn't give a message bean a way of saying "OK, I'm now this person for all future bean calls made for this message", so there is no standard way of authenticating a message bean using a message-embedded authentication mechanism.

    I think the only way we'll get impersonation support is if the new JSR on this gets in and vendors start implementing GSSAPI-type layers in the product. We'll probably end up using Kerberos. This probably means any APIs to handle impersonation may simply be the GSSAPI calls.

  10. Billy,

    Yes, there has to be a trust relationship between the WebLogic and WebSphere servers. This would allow WebLogic to build a secure connection with WebSphere (under its own name, not the client's) and then act on behalf of its client.

    Things could get more complicated if we were worried about WebLogic misrepresenting a client request. I don't know of any way to prevent this except to build checks into the application. The cryptography behind it wouldn't be too difficult, but it would be application code.
  11. True, but we're talking about a fundamental weakness in almost every security/single sign-on solution. There is no standard for impersonation etc.

    This weakness is built into J2EE from the ground up. Look at connection pooling, same problem: the app server uses a generic credential to get a database connection, so a bug in the security logic in the app server reveals database data.

    A real solution would offer end-to-end credential delegation and no mapping: login to the web service, passed to the app server, passed to other app servers/legacy systems, passed to the database. But we're a long way from something like this...

    DCE and Kerberos are probably the only real 'standard' choices and these don't seem to be making inroads in the J2EE market.
  12. Billy,

    I think you owe us a response.
    Some of us challenged your conclusion that the RMI/JRMP combination was a better deal.
    Do you still stand by this conclusion ?

    Certainly a hybrid like RMI/IIOP is not going to solve many problems, but maybe it could be a step in the right direction.

    I subscribe to your suggestion that EJBs should be as "protocol transparent" as possible.

    And on the other hand, little could be done to solve existing problems.
    My opinion is that we shouldn't expect the interoperability problems to be solved by the framework.
    Theoretically it could be done; also theoretically, we might introduce some inefficiency if we move some aspects to a lower level.

    But from a practical point of view you'll have to admit that almost nothing can be fixed at this late stage in EJB 2.0 draft, and it's very hard to see how Sun will want to drastically change some critical aspects in the next versions.

    But on the other hand you can develop distributed user level APIs (as opposed to framework level) that can solve the problems with some extra work.
  13. Costin,
    you will be surprised to see the changes to the EJB 2 spec
    in the final release proposed by the EJB group!

    -Gianni
  14. Gianni,

    The question is "should I be surprised for the worse or for the better" ?

    From your tone I suspect that you think it should be for the better, but many people may have different views on the same subject.

    And you know me and how hard I am to please.
    But this is a totally different discussion, and there is a very active thread related to 'What's wrong with EJB spec'.
    Maybe you can contribute something there.
  15. I did respond, didn't I? I gave my opinion, I gave more information in the leader piece.

    I think you'd be surprised what could change in the time remaining for the EJB 2.0 spec.

    RMI/IIOP is a step in the right direction, but it isn't 100% Java (an implementation problem) and its security isn't interoperable (which may be solved, at least between J2EE servers, if the JSR succeeds and everybody uses GSSAPI).

    But basically, it appears that only Java J2EE clients will ever be first-class clients to J2EE servers, even using IIOP (GSSAPI doesn't apply to CORBA ORBs). So, if this is the case....
  16. GSSAPI doesn't apply to CORBA ORBs.

    What I meant of course, was that the JSR etc only applies to J2EE vendors. If the Corba vendors also support GSSAPI type security then even better.

  17. Jonathan,
    <quote>
    I am assuming this is done over an encrypted channel and the Weblogic server and the client have a shared secret otherwise there is another man in the middle attack between the client and Weblogic.
    </quote>
    It is public key encryption. Why would you want an "extra"
    encrypted channel?
    I think your concerns are not necessary...
  18. qing,

    To clarify, I specified that between the Weblogic server and the client there must be both a shared secret (which can be created through the use of public key cryptography) as well as an encrypted channel. While it would be very uncommon to have a shared secret and use a clear channel, both are needed in order to avoid further attacks.
  19. Billy and Costin,
    You cannot compare JRMP vs. IIOP.
    They are two completely different protocols for different purposes.
    They decided to adopt RMI/IIOP for the following simple reasons:

    RMI is pretty limited: you cannot transmit the transaction and security contexts over JRMP.
    That's why they decided to adopt IIOP.
    However, they had to add a feature to CORBA, Objects by Value, in order to support RMI features.

    JRMP is Java-based while IIOP is cross-language.
    That's why, as you said Billy, WebSphere Advanced
    (ORB written in Java) and Component Broker (aka WebSphere Enterprise, ORB written in C++) can communicate with each other.
    However, in that case you have another problem: inter-language in-process calls. CB supports that.

    Interoperability at the transaction service level has almost been achieved. There is still something to do and test, but we are close.
    For security, you're right, it is a problem until they define a Security Service with a related standardized API.

    I don't know about you guys, but I do not like a completely standardized world.
    Sun is screwing Java by adding an API every minute for this and that. Tomorrow, maybe an API for going to the bathroom.
    I am one of those who embraced Java back in 1995, but I am not happy with how Sun is doing things.
    Having a standardized API is good, but defining all the APIs, for all the possible programming models of all the possible kinds of applications using all possible devices
    .....IS TOO MUCH!
    What is the difference between Sun and Microsoft?
    Sun is using a programming language while Microsoft is using an operating system.

     -Gianni
  20. Hi Gianni,

    Besides the fact that I was criticising JRMP, you could've given us credit for knowing what CORBA is and what RMI is ;)

    And yes, one can compare JRMP and IIOP :)
    It's not quite like Apples and Oranges, it's more like Pepsi and Coca-Cola.
    You're trying to say that IIOP is absolutely better than JRMP, and I would generally agree, but Billy's points in the article are very sound too.
    And Jim Waldo, in another web forum, made some extra good points about why they stayed with RMI for Jini.
    So, overall, it's not quite as clear-cut as you're trying to say.

    Sure, a lot of people disagree with some of the things that Sun is doing, but that's a totally different discussion.

    I would rather be interested if you could tell us in detail what you mean by
    "Interoperability at the transaction service level has almost been achieved."
    Even a hyperlinked reference would help, because I do have some doubts.
    Cheers,
    Costin

  21. I don't want to divert the topic of this discussion, but Gianni, I couldn't agree more:
    "IS TOO MUCH! What is the difference between Sun and Microsoft ? Sun is using a programming language while Microsoft is using an operating system."

    And of course you can't come up with all the APIs for every application.
    If you guys want, I would like to spawn another hot topic: why use SOAP instead of CORBA? Just because it is the new thing in the market, or are there some good reasons too?
    Can someone help me here?
  23. Rashid,
    I do not think SOAP cannot be used instead of CORBA.
    SOAP is a marketing bubble, like several other XML standards.
    As for the Sun and API topic, let's talk in another discussion.
    Let me know.

    -Gianni
  24. Misspelling above: I meant "I do not think SOAP can be used..."
  25. Gianni, I already spawned a new thread in the Enterprise Events forum. I would appreciate your two cents, and those of everyone who wants to share their thoughts on SOAP vs. IIOP, JRMP or RPC.

    Thanks,
    Rashid.
  26. Costin:
    "As far as I know the 2PC doesn't work across Transaction Managers ."

    No, it does work, even across different systems (CORBA to legacy),
    like an OTM, which uses the IIOP protocol, and IBM CICS (one of the most used OLTP systems), which uses the LU 6.2 protocol.
    Because both protocols inherit from DTP,
    with some work you can coordinate a transaction across different systems.
    Practically, it is not a big deal.
    The other TMs are seen as RMs by the coordinator.
    That's it.

    People think that 2PC is something they will never need.
    Wrong. I agree it is heavy, but sometimes you need 2PC.
    Any application that wants to reuse the legacy layer has
    to deal with it.
    Any transaction with more than one RM has to.
    Not all applications are as simple as "my business object and a remote database"...


    By the way, how are you doing Costin ?
    I see you are pretty active in this forum ;-)
    -Gianni
  27. Transaction Propagation

    I mentioned that theoretically it is possible.

    Practically, as you just said, you set the subordinate TMs to act as RMs.
    First of all, I'm not sure to what extent current ORBs and OTS implementations support such a feature, and how.
    That's why I was asking you for references.

    And theoretically there are some hurdles, too.

    "People think that 2PC is something they will never need"
    I'm not one of those people.

    And as a matter of fact, business applications often also have to provide mechanisms for "application-level recovery", or "compensating transactions", because there are tons of other application issues that can't be solved automatically.

    This is one case where you can avoid a distributed transaction, although it would be easier to use one.

    Distributed transactions are more than "heavy"; you can talk to DBAs to see why they are an operational issue as well.
    It is very easy for developers to overlook that, because in general transactions do succeed in QA or development testing, and it's not their primary concern.

    Let me give you an example to illustrate what I mean:
    You go to the grocery store's POS.
    That software calculates the total, charges your credit card or ATM card using an external service (possibly ending up on a legacy system), and updates the local database after the charge succeeds.

    So, by today's standards and hype, one would use EJBs, Connectors and, of course, distributed transactions, which would be wrong, wouldn't it?
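
    Costin's POS scenario can be sketched as a compensation pattern rather than a distributed transaction. CardProcessor and StoreDb below are illustrative stand-ins, not real APIs:

    ```java
    // Sketch only: charge the external system first, then commit locally;
    // if the local step fails, issue a *compensating* refund instead of
    // holding both systems inside one 2PC transaction.
    interface CardProcessor {
        String charge(String card, double amount);   // returns a charge id
        void refund(String chargeId);                // compensating action
    }

    interface StoreDb {
        void recordSale(String chargeId, double amount);  // local transaction
    }

    public class PosCheckout {
        public static boolean checkout(CardProcessor cards, StoreDb db,
                                       String card, double total) {
            String chargeId = cards.charge(card, total);  // external system commits here
            try {
                db.recordSale(chargeId, total);           // local commit
                return true;
            } catch (RuntimeException e) {
                cards.refund(chargeId);                   // compensate rather than 2PC
                return false;
            }
        }
    }
    ```

    This is exactly the "application-level recovery" mentioned above: more application code, but no transaction manager coordination across the card network and the store database.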

    That's what I meant when I said what I said.


    Cheers,
    Costin
  28. Costin,
    My point is not that IIOP is absolutely the best.
    The context here is EJB and using RMI/IIOP or RMI/JRMP.
    Yes, Costin, they are like apples and oranges in that context. (However, what Billy says about RMI/IIOP is true.)

    I do not see JMS and SOAP resolving any issues at all.

    -Gianni

  29. As someone who thinks that EJB is completely overkill for 95% of all Java-based projects, I enjoyed this article tremendously.

    For distributed projects with cross-language clients, clearly CORBA/RMI/IIOP is the only way to go. And this for obvious reasons... there really isn't any feasible alternative.

    But for the great majority of Java-to-Java systems, clearly RMI is far superior, IMHO. It is easier to code. It is LESS code. It is easier to debug. An abstract transaction framework that provides good ACIDity can easily be implemented on top of it. Also, Java has an excellent security framework which is far superior to CORBA's.
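
    As a rough illustration of how little code native RMI needs: one remote interface, one implementation, and a lookup. All names and the port are invented, and the in-process registry is just there to keep the example self-contained in a single JVM:

    ```java
    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Minimal "native" RMI: define a Remote interface, export an
    // implementation, bind it, look it up, and call it.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public class RmiDemo {
        static class GreeterImpl implements Greeter {
            public String greet(String name) { return "Hello, " + name; }
        }

        public static void main(String[] args) throws Exception {
            // Server side: in-process registry plus an exported object.
            Registry registry = LocateRegistry.createRegistry(10990);
            GreeterImpl impl = new GreeterImpl();
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(impl, 0);
            registry.rebind("greeter", stub);

            // Client side: look the service up and invoke it through the stub.
            Greeter remote = (Greeter) registry.lookup("greeter");
            System.out.println(remote.greet("world"));   // prints "Hello, world"

            // Clean up so the JVM can exit.
            UnicastRemoteObject.unexportObject(impl, true);
            UnicastRemoteObject.unexportObject(registry, true);
        }
    }
    ```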

    These things add up to quicker time to delivery, fewer defects, and less code complexity. Which in turn adds up to LESS dollars spent on the project.

    I'm still not certain why the hype for EJB and RMI/IIOP is so huge; for the great majority of problem domains, I see it as an anti-pattern. Maybe someone could explain it to me differently... I don't know. If a project doesn't require interoperability, why are we using this technology when native RMI is clearly far superior?

    Just my two cents.

    -- Rick
  30. Hi Rick,

    I think it's overkill for the remaining 5% also :)

    But you're being a little unfair to CORBA, so let me tell you what RMI doesn't have:

    1. Asynchronous invocations.
    2. One-way invocation.
    3. Invocation context.

    Of course you also have none of these when you use RMI-IIOP.

    And performance tests show that RMI is only 5%-10% faster than good ORBs, which is natural given that an ORB supports a lot more features.
  31. Costin,

    To be clear, I'm not slamming CORBA. I definitely think that it has its place. For interoperability, it definitely rules all.

    RMI (natively) doesn't have the items you mention, but frameworks and good design patterns can be used to perform exactly those functions. To be honest, I am surprised that Sun didn't put invocation contexts into RMI. This would save developers the time of having to develop frameworks to perform this function.
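
    A hedged sketch of that point: asynchronous and one-way semantics can be layered on top of any synchronous call (RMI or otherwise) with a small wrapper and a thread pool. The "remote" call here is simulated by a plain Supplier; nothing below is a real RMI or CORBA API:

    ```java
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;
    import java.util.function.Supplier;

    // Framework-style wrapper giving async and one-way semantics over a
    // synchronous invocation. Illustrative only.
    public class AsyncWrapper {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        // Asynchronous invocation: the caller gets a Future immediately
        // while the (possibly slow) call runs on a pool thread.
        public <T> Future<T> async(Supplier<T> remoteCall) {
            return pool.submit(remoteCall::get);
        }

        // One-way invocation: fire and forget; the result is ignored.
        public void oneWay(Runnable remoteCall) {
            pool.execute(remoteCall);
        }

        public void shutdown() { pool.shutdown(); }

        public static void main(String[] args) throws Exception {
            AsyncWrapper w = new AsyncWrapper();
            // Stands in for a blocking remote ping() over RMI.
            Future<String> f = w.async(() -> "pong");
            System.out.println(f.get());   // prints "pong"
            w.shutdown();
        }
    }
    ```

    An invocation context could be carried the same way, by having the wrapper attach context data to every outgoing call, which is presumably the kind of framework code Rick has in mind.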

    But then again, Sun seems to do this with Java. They provide the barebones and let other software vendors develop application frameworks to give all of these bells and whistles.

    I'm glad I'm not the only one around who thinks EJBs are overkill. It's good to see that I have company... heheh. :)

    -- Rick