Discussions

News: CSIRO publishes J2EE Application Server comparison

  1. Australian research group CSIRO has published an evaluation of the J2EE servers WebLogic 6, WebSphere 3.5.3, JBoss 2.2.2, Borland 4.5.1, SilverStream 3.7.1 and Interstage 3. The report must be purchased, but a free summary has been published in which Borland and BEA came out tied with the top overall score, and JBoss received the lowest score for 'Scalability and Reliability'.

    Read Evaluating J2EE Application Servers.

    Threaded Messages (47)

  2. However, they did state that "In terms of performance, it should be first noted that all these products should provide acceptable levels of response times and throughput for all but the most demanding applications."

    It was good to see Borland in the #1 position on the performance vs. scalability grid - too bad this great app server is virtually unknown outside techie circles.
  3. Of the people reading this thread, who is using INTERSTAGE? CSIRO compared six "leading" app servers, including Fujitsu INTERSTAGE, so I am curious about its market share.
  4. Jim Bell wrote:
    >CSIRO compared six "leading" app servers, including Fujitsu INTERSTAGE, so I am curious about its market share

    They have a 26% market share in Japan, so it might be a significant player in Australia too. Never heard it mentioned here in Norway, though.

    (Source: http://software.fujitsu.com/en/INTERSTAGE/v4info/V4_FAQ.pdf)
  5. I could access the report early in the morning, but now in the evening (5:45 PST) I am unable to access it.
  6. Wonder what kind of test applications were written? These results can only be properly judged if we know about the test applications and other configuration details like the OS, the database (if any), etc.
    I hope the authors publish that kind of detail in their next review too.

    Shiva.
  7. Peter,

    It's a shame that you resort to 'shoot the messenger' tactics.

    I'm afraid I must take exception to your attack on our credentials at CSIRO. The group that produced this report has people with extensive industry middleware experience, ranging from years at organizations such as Microsoft, IBM and Unisys, to experience in (failed!) dot.coms and the defence industry. I personally have acted as an architect on a 200 person development project. This is not a group of rabbit killers!! We also consult regularly in this area to major organizations in Australia such as the Australian Stock Exchange - I won't go on - it's all on our web page. But trying to deflect the report's comments by criticizing the skills of the authors ain't that smart in this instance. It detracts from your other comments. Sorry :-}

    As for Gartner-style insights....well. Please tell me how often Gartner builds a test case on 6 different app servers, tunes the configs for each to achieve 'pretty good' performance, tests this extensively, analyses the results (usually in conjunction with the development teams as we find issues that need resolving), and then publishes the results for all to see. Our insights come from building, running and analyzing stuff - if these aren't deeper than Gartner's/Ovum's, whose insights come from reading documentation and talking to people, then there surely is something wrong with our approach :-}

    I'm happy to acknowledge limitations in what we've done. If you read the report, these are all documented in black and white, so they're understood. We'd love to reduce or remove these limitations, but if you know of anyone else doing deeper and more extensive evaluations of middleware technology, then we'd love to learn from them. It's an expensive and time-consuming exercise, believe me. And this is where CSIRO's scientific heritage kinda helps, and rather differentiates us from Gartner's, er..., let's say non-scientific approach :-}

    As for being slightly behind on a couple of versions, we're working on updating this - a major new report will be out by Xmas, and some updates will be out before then. As I said, this is time-consuming stuff to do.

    Ian Gorton (ian dot gorton at cmis dot csiro dot au)
  8. Posted by Shiva Paranandi 2001-09-05 20:41:06.0.

    >Wonder what kind of test applications were written? These results can only be properly judged if we know about the test applications and other configuration details like the OS, the database (if any), etc.

    Very briefly...

    Relatively simple application server business logic, about 1000 lines of code in EJBs, implemented in two ways (focusing on testing the EJB container):

    1) session bean only, talking straight JDBC to database
    2) session bean facade, talking to CMP entity beans

    Database is Oracle 8.0.5, hosted on its own machine. App servers are tested on a single machine, and on a 2-machine cluster, for both of the above EJB architectures, so we can compare session bean-only and CMP performance/scalability.
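
    To make the first of the two variants concrete, here is a minimal sketch of what a 'session bean only, straight JDBC' bean might look like. This is not the actual benchmark code - the class name, business method and JNDI datasource name (NewOrderBean, placeOrder, jdbc/TestDB) are hypothetical illustrations, and the real test logic is documented in the report.

    // Hypothetical sketch of the "session bean only, straight JDBC" variant.
    // Names (NewOrderBean, placeOrder, jdbc/TestDB) are illustrative only.
    import java.sql.*;
    import javax.ejb.*;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class NewOrderBean implements SessionBean {

        private DataSource ds;

        public void setSessionContext(SessionContext ctx) {
            try {
                // Container-managed connection pool; the JDBC driver behind it
                // is whatever the app server has been configured with.
                ds = (DataSource) new InitialContext().lookup("java:comp/env/jdbc/TestDB");
            } catch (Exception e) {
                throw new EJBException(e.getMessage());
            }
        }

        // One business transaction: all SQL issued directly over JDBC.
        public void placeOrder(int customerId, int itemId, int qty) {
            Connection con = null;
            try {
                con = ds.getConnection();
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO orders (customer_id, item_id, qty) VALUES (?, ?, ?)");
                ps.setInt(1, customerId);
                ps.setInt(2, itemId);
                ps.setInt(3, qty);
                ps.executeUpdate();
                ps.close();
            } catch (SQLException e) {
                throw new EJBException(e.getMessage());
            } finally {
                try { if (con != null) con.close(); } catch (SQLException ignore) {}
            }
        }

        public void ejbCreate() {}
        public void ejbRemove() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
    }

    In the second variant, the same business method sits behind a session facade that calls CMP entity beans rather than issuing SQL itself.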

    Clients run on their own machine, and each client fires off a continuous (i.e. no wait time) stream of transaction requests of a known (randomised) mix (a la the TPC-C algorithm). The number of clients is varied from 100 to 1000, and throughput and response time are measured (and checks done for no paging, etc).
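
    A rough sketch of one such client thread is below. The remote interface (OrderService) and the mix weights are invented placeholders for illustration only; the real harness uses the TPC-C-style mix described above and records failures and throughput separately.

    // Rough sketch of a single load-generating client thread: a continuous
    // stream of randomised transactions with response times accumulated.
    // OrderService and the mix weights are placeholders, not the real harness.
    import java.util.Random;

    interface OrderService {
        void newOrder(int id) throws Exception;
        void checkStatus(int id) throws Exception;
        void delivery(int id) throws Exception;
    }

    public class ClientThread extends Thread {

        private final OrderService service;   // stub for the EJB remote interface
        private final Random rnd = new Random();
        private long txCount = 0;
        private long totalMillis = 0;

        public ClientThread(OrderService service) {
            this.service = service;
        }

        public void run() {
            while (!isInterrupted()) {
                long start = System.currentTimeMillis();
                try {
                    int p = rnd.nextInt(100);          // weighted mix, TPC-C style
                    if (p < 45)      service.newOrder(rnd.nextInt(1000));
                    else if (p < 88) service.checkStatus(rnd.nextInt(1000));
                    else             service.delivery(rnd.nextInt(1000));
                } catch (Exception e) {
                    // a real harness counts failed transactions separately
                }
                totalMillis += System.currentTimeMillis() - start;
                txCount++;                             // no think time between requests
            }
        }

        public double avgResponseMillis() {
            return txCount == 0 ? 0 : (double) totalMillis / txCount;
        }
    }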

    Machines are dual Pentium 800MHz with 1 GB memory, running NT 4, on a 100Mbit LAN. Identical machines and infrastructure configs are used for all tests; just the app server components vary. The next version of the report will run tests on 4 CPU, 4 GB Win 2K/Linux machines, using an 8 CPU database machine. This lab is being set up as I type.

    There are minor test variations, forced on us by the app servers. These are documented in detail in the report. We also spent a lot of time tuning the tests to make them run fast (I hesitate to say optimally) - these configurations are documented.

    Hope this helps, brief as it may be...I'm sure you can guess where it's fully documented :-}

    Ian
  9. This is a good effort that is being undertaken. But what people would really like to see is how the EJBs on a particular application server would perform along with servlets/JSPs. There are a good number of users who I believe would need some kind of comparison of JMS on these servers too.

    Shiva Paranandi.
  10. Posted by Shiva Paranandi 2001-09-07 00:27:52.0.

    >This is a good effort that is being undertaken. But what people would really like to see is how the EJBs on a particular application server would perform along with servlets/JSPs. There are a good number of users who I believe would need some kind of comparison of JMS on these servers too.

    We have the code for a servlet/jsp based version of our tests, we just haven't run it in anger yet. Time...

    We have developed a tool called JMSRack that allows automated load testing of JMS providers with user-defined test loads (configured in a GUI, not programmed). We're about to put this into beta testing as a service available on the web - email doug dot palmer at cmis dot csiro dot au if you'd like more information, or see his paper at Middleware 2001 in Germany soon.
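
    For readers unfamiliar with what a JMS load test involves, here is a bare-bones illustration. This is not JMSRack (JMSRack drives loads configured in a GUI), and the JNDI names jms/TestFactory and jms/TestQueue are placeholders:

    // Minimal, hypothetical sketch of a JMS load run: one producer pushing a
    // fixed number of messages at a queue and timing the result.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class JmsLoadSketch {
        public static void main(String[] args) throws Exception {
            InitialContext ctx = new InitialContext();
            QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("jms/TestFactory");
            Queue queue = (Queue) ctx.lookup("jms/TestQueue");

            QueueConnection qc = qcf.createQueueConnection();
            QueueSession session = qc.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);

            int messages = 10000;
            long start = System.currentTimeMillis();
            for (int i = 0; i < messages; i++) {
                TextMessage msg = session.createTextMessage("payload " + i);
                sender.send(msg);
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(messages + " msgs in " + elapsed + " ms ("
                    + (messages * 1000L / Math.max(elapsed, 1)) + " msg/s)");
            qc.close();
        }
    }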
  11. Hi Ian,

    I for one have my shotgun lowered... :-) I'm glad 3rd parties are publishing performance comparison reports like this. This is only the 2nd one I've heard of over the past 2 years; the first was an academic paper from a European university (the name slips my mind right now).

    But I must question the LEGALITY of your report... I could have sworn vendors like BEA have made a big ado in their licensing about publishing numbers? You guys managed to slip through the loophole? Because you're not based in the US? :-)

    Regardless, I welcome this as an impetus for even more 3rd party evaluation reports. Consumers have the right to know!

    Gene
  12. Gene,

    >But I must question the LEGALITY of your report... I could have sworn vendors like BEA have made a big ado in their licensing about publishing numbers?

    I believe Oracle still do, and MS did until Win2K was released. Others I don't know, but it's never been raised as an issue. We've worked with, for example, BEA for 2 years, on Tux, WLE and WLS. They've known from the start what we've been doing, helped us along the way, and seen all results before they hit the streets. Same with other vendors, for differing time periods and levels of involvement.

    >Regardless, I welcome this as an impetus for even more 3rd party evaluation reports. Consumers have the right to know!

    thanks - this is exactly my belief too. I'd like to think it's in everyone's interests, consumers and vendors, but I'm probably being naive given the religion that pervades this industry :-{

    it's weekend here...that's enough posts for one week!!
  13. > Database is Oracle 8.0.5, hosted on its own machine.

    Which begs the question, which JDBC driver was used? WebLogic is integrated with a good quality Oracle driver whilst others (such as the open source JBoss) expect you to supply a driver of your choice.

    Did the servers that did not come with a quality Type 4 driver get lumped with Oracle's woeful default driver (classes12.zip)?

    Or did you configure the App Servers that have bundled drivers to ignore their own driver and use a common one of your selection?
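
    For what it's worth, the driver choice boils down to which driver class and URL the server's connection pool is configured with. Here is a minimal standalone sketch using Oracle's Type 4 (thin) driver from classes12.zip - the host, SID and credentials are placeholders, and whether a given app server pool uses this driver or a vendor-bundled one is purely a configuration choice:

    // Hypothetical standalone check of which JDBC driver is actually in use.
    import java.sql.*;

    public class DriverCheck {
        public static void main(String[] args) throws Exception {
            // Oracle's Type 4 (thin) driver shipped in classes12.zip
            Class.forName("oracle.jdbc.driver.OracleDriver");

            Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL",   // host/SID are placeholders
                "scott", "tiger");

            DatabaseMetaData md = con.getMetaData();
            System.out.println("Driver: " + md.getDriverName()
                    + " " + md.getDriverVersion());
            con.close();
        }
    }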
  14. Posted by Peter Daily 2001-09-07 01:19:24.0.
    >> Database is Oracle 8.0.5, hosted on its own machine.

    >Which begs the question, which JDBC driver was used?

    Peter - it's in the report :-}
  15. Keep in mind that when we say the "Australian Research Group CSIRO" we are not talking about a Gartner-style private research organisation. CSIRO is an Australian government organisation that traditionally looks into things like wombat droppings. Note current stories from the CSIRO homepage:

    * Forecasting tomorrow's air quality in your own suburb
    * Making better dough in Canberra
    * Low rainfall - high value wood
    (Bet you didn't know Canberra is Australia's capital city ;-) )

    In any case I would not be rushing off to throw out your current app server based on CSIRO recommendations. I would like to see the report body; however, the concerns I have just from looking at the executive summary are:

    1. Not to be picky, but not a lot of time got spent knocking up that HTML. Hopefully that doesn't reflect the effort that went into the 'research'.
    2. The available summary is very brief if you discount the rather wordy biographies at the beginning, and frankly it's not very well structured.
    3. The results are presented in a very simplistic manner. What types of app/load/OSes? No hint is given to the meaning of the 1-5 scales or for the rather non-descript categories. I mean 'System Management'...what exactly are we talking about here? No, I don't want to have to hand over the $$ to find out ;-)
    4. Given that this is fast-changing technology, it would have been a nice gesture to research the current app server versions. For example:

    * WebSphere is currently at 4.0, research was done on 3.5.3
    * WebLogic is currently at 6.1, research was done on 6.0
    * JBoss is currently at 2.4, research was done on 2.2.2
     

    The only positive thing I'd say is that at least it's not one of those dodgy Oracle/BEA sponsored 'research' reports that, oddly enough, put their own product ten times ahead of the pack.
  16. I did see the report. The paper took well over six months, to make sure that every vendor (I'm not sure how JBoss was dealt with) was given the opportunity to get the best figures possible.

    The front pages shown at the URL appear to be just exports from the Word document, and the research is quite thorough.

    They put a lot of work into it.
  17. Peter,

    If you check out the link in the first mail, you will see that the group that did the study was the Software Architectures and Component Technologies (SACT) group, part of CSIRO Mathematical and Information Sciences (CMIS).

    The page also gives the biographies of the primary authors - and their work did not seem to include studying wombat droppings ;)

    But anyway, the user community should start pushing AppServer vendors to release ECperf figures soon. Those benchmark results should give an indication of a vendor's adherence to standards, and of the reliability, performance and scalability of their product offerings.

    And I would look suspiciously at any vendor that scoffed at ECperf or refused to release figures. But the next couple of months will tell.

    -krish
  18. Krish,

    >But anyway, the user community should start pushing AppServer vendors to release ECperf figures soon. Those benchmark results should give an indication of a vendor's adherence to standards, and of the reliability, performance and scalability of their product offerings.

    My only concern with ECperf is that it may be destined to follow the same path as tpc-c. Look at the tpc-c benchmark results on tpc.org, and they're basically irrelevant to 99.9% of users. I realise ECperf is different in having a mandated code base, but assuming vendors are free to choose their own hardware to run the tests on, this may negate some of the benefits.

    Ian
  19. While the comments made are definitely reasonable (I'd also like to know more about the test methods, the applications used, etc.), the report summary overall comes very close to my experience; that is to say, Borland AppServer is really excellent in terms of compatibility, usability and performance, while WebLogic comes second. I also like WebSphere for its scalability and performance, but the compatibility is bad.
    And JBoss... well, it is nice, but definitely not ready for prime time at the moment. We'll see with V3.
    No experience with Interstage though, and not much experience with SilverStream.

    So overall the report seems to be "well done" (the results seem reasonable to me).

    kind regards

    Messi
  20. I cannot believe that deployment and development for JBoss got a score of 1. C'mon, I have used WebLogic 4.5, 5.x and 6.x and I know how difficult it is to deploy things. JBoss is the only J2EE container that facilitated smooth deployment. I wonder what kind of testing and factors they used to analyze deployment and development.

    Swami
  21. > C'mon, I have used WebLogic 4.5, 5.x and 6.x and I know how difficult it is to deploy things

    How can you say it's hard to deploy something on WLS 5.x or 6.x????
  22. Yeah, deployment in WebLogic 5.x is not that bad at all.

    Interestingly, WebSphere v3.5 got 3.5 on deployment. 4.0 changes all that. Packaging and deployment in 4.0 is easy and very intuitive, in fact the best I have seen in app servers.
  23. Can you say ejbc?
  24. Since when are scalability and reliability in the same category?

    They clearly aren't familiar with JBoss, which is known to be extremely reliable and which leads the market in download numbers per month, and whose hot-deploy features were the first on the market for developers.

    This is just a publicity stunt by people who don't know what middleware is about.
  25. > Since when are scalability and reliability in the same category?

    Basically, in terms of a deployed system, you get scalability and enhanced reliability/availability through clustering, load-balancing and failover. Our categorizations may not be perfectly combined, but it's all explained in the report body.

    >They clearly aren't familiar with JBoss, which is known to be extremely reliable and which leads the market in download numbers per month, and whose hot-deploy features were the first on the market for developers.

    Afraid we didn't evaluate download numbers per month - we'll leave that to marketing and other analyst organizations.

    >This is just a publicity stunt by people who don't know what middleware is about.

    This messenger's taking a lot of bullets today :-}

    It's incredible the emotive responses that this report has raised. I guess it just amazes me how many individuals there are out there who have obviously worked extensively, and must do every day, with all 6 app servers we tested and evaluated, and who know more than a team of 6 who spent 6 months working closely with 5 vendors to produce this. How you guys keep current on every feature of all these app servers just amazes me :-}. Why don't you all write analysis reports and enlighten the world - it would save us all a lot of work.

    The report is no doubt not perfect. But if it's 95% correct, and I believe it is, then it's 94% more correct than anything else I've seen of this kind. And it'll be 100% correct soon!! If you know of better, I'd like to be shown where to look.

    Ian
  26. > Basically, in terms of a deployed system, you get scalability and enhanced reliability/availability through clustering, load-balancing and failover. Our categorizations may not be perfectly combined, but it's all explained in the report body.

    Frankly, I am less than impressed. You are talking about fail-over, the fact that a system remains available by "failing over" when a problem occurs, which is very different from the "reliability" of a single system.

    In fact you can build a reliable system from unreliable ones with fail-over.

    From my experience JBoss has been extremely reliable, more reliable in fact than many of the systems you covered, especially WebSphere.

  27. > In fact you can build a reliable system from unreliable ones with fail-over.

    Unfortunately, real systems fail for reasons other than bugs in application or infrastructure code (e.g. hardware, network, heisenbugs, operating systems). A system that has to be truly reliable has to recognise this, and be architected to cater for such circumstances, which basically means assuming all components are inherently unreliable. It's a lot easier to do this when your infrastructure supports clusters, load-balancing and failover. Try building a seriously big system without these features, and you'll see what I mean. This is what this evaluation point is getting at - and I'd be happy to try to modify the explanation to make it 100% clear.

    I really wish JBoss had these features so we could give it a better ranking. I'm afraid it doesn't though, and I don't think criticizing us and the report for pointing this out is really very productive. I'm looking forward to testing v3.0 (?) when these features are available.

    >From my experience JBoss has been extremely reliable, more reliable in fact than many of the systems you covered, especially WebSphere.

    Ours too - it scores as well as any, and better than most, in our evaluation for robustness during development and testing. We based this on how many obstacles we hit in trying to get our code running, and running fast, with each product, so it would include bugs, features (!), unexplainable behaviour, etc. Basically JBoss was pretty simple in this respect.

    I hope this helps,

    Ian
  28. > > From my experience JBoss has been extremely reliable, more reliable in fact than many of the systems you covered, especially WebSphere.

    > Ours too - it scores as well as any, and better than most, in our evaluation for robustness during development and testing.

    Yes, it was quite amusing to read that JBoss rated '1' for "Scalability & Reliability". Anyone who's used JBoss knows it's a solid product. But since it doesn't yet support SSI Clustering it gets a '1'? Possibly the folks at CSIRO should rename this column to "supports clustering? yes/no" rather than giving the false impression that JBoss isn't reliable.

  29. > Possibly the folks at CSIRO should rename this column to "supports clustering? yes/no" rather than giving the false impression that JBoss isn't reliable.

    There's much more to clustering than warrants a yes/no answer. It's the vehicle for building highly scalable and reliable/available systems.

    If the word 'reliability' is the core of your objection, I'd be happy to modify this, as long as it still captures the essence of the comparison across products. Email me and we can discuss this...

    ian.gorton@cmis.csiro.au
  30. JBoss and Borland

    Part of the problem with JBoss's acceptance is history. It did not reach the necessary level of maturity until well after BEA and IBM had made themselves entrenched in the market. When we were deploying J2EE applications to Weblogic, JBoss had no working binaries for download, and the sources were far from complete. The same goes for Borland. No matter how good Borland's application server is, and no matter how good JBoss is or becomes now, BEA and IBM will continue to lead the market for quite some time to come.

    It is interesting that one of the posts talks about limitations of Weblogic that have been solved now for two years. Remember that people will make the same mistakes in judging JBoss and Borland, and it is much harder for the smaller and later players to make up ground once a marketplace begins to gel. The JBoss benefit is its "free" status; I think that will help it grow its base over time. Borland on the other hand faces an extremely steep uphill battle. Borland would have to be significantly more than "a little better in performance" etc. to even attract the slightest attention at this point. It's not fair, I don't like it, but it is the way the market works.

    As for Silverstream, it is odd that they even showed up in the review. Like Borland, they have good integration between development and the application server, however they didn't tack tightly enough to the standard (in this case J2EE) to get any real momentum from it. Silverstream is a fine product, but it doesn't have any traction in the J2EE space, and it failed to carry the unbelievable "Powerbuilder momentum" into the Java world as many of us expected that it would.

    As far as deployment ease, Websphere and Weblogic will continue to trail here for some time. It takes a lot of "special sauce" to make these platforms look "easy", and that sauce is typically found with many hours of elbow grease (and frustration) on the part of the application developer. Both of these products continue to improve in this regard (Websphere for example couldn't have gotten any worse!), but products like Orion and Resin have "been there" for quite some time already.

    It would have been nice to see iPlanet, Oracle (nee Orion) and Sybase included in the review. Those are all at least considered in the server selection processes of many companies. The inclusion of Silverstream and Borland, while nice for reference, had little value IMHO.

    Peace,

    Cameron.
  31. JBoss and Borland

    Cameron,

    >entrenched in the market. When we were deploying J2EE applications to Weblogic, JBoss had no working binaries for download, and the sources were far from complete. The same goes for Borland. No matter how good Borland's application server is, and no matter how good JBoss is or becomes now, BEA and IBM will continue to lead the market for quite some time to come.

    When we first started looking for EJB-capable app servers to evaluate (close to two years ago now), we had problems finding ones that were EJB 1.1 compliant. Borland (Inprise then) was one of the few EJB 1.1 compliant app servers available, and we started evaluations using version 4.01, from memory.

    Paul.
  32. "Its incredible the emotive responses that this report has raised. I guess it just amazes me how many individuals there are out there who have obviously extensively worked, and must do every day, with all the 6 app servers we tested and evaluated, and who know more than a team of 6 who spent 6 months working closely with 5 vendors to produce this. How you guys keep current on every feature of all these app servers just amazes me :-}. Why don't you all write analysis reports and enlighted the world - it would save us all a lot of work."

    Welcome to theserverside.com! With the exception of a handful of contributors, the majority of comments on all topics amount to unqualified, subjective and emotional drivel. If there was a rating scheme like slashdot, most of these comments would be moderated down to -1 and we wouldn't have to read them. As it is, the best we can do is ignore them.

    (Of course, this probably applies to my comment too)
  33. "Welcome to theserverside.com! With the exception of a handful of contributors, the majority of comments on all topics amount to unqualified, subjective and emotional drivel. If there was a rating scheme like slashdot, most of these comments would be moderated down to -1 and we wouldn't have to read them. As it is, the best we can do is ignore them"

    thanks!!

  34. > > I guess it just amazes me how many individuals there are out there who have obviously worked extensively, and must do every day, with all 6 app servers we tested and evaluated, and who know more than a team of 6 who spent 6 months working closely with 5 vendors to produce this. How you guys keep current on every feature of all these app servers just amazes me :-}. Why don't you all write analysis reports and enlighten the world - it would save us all a lot of work."

    Hmm, I do detect an ounce of sarcasm there. Well, I would contend that the prospect of investing 3 person-years' worth of effort into a report that is 6 months out-of-date by the time it gets released isn't universally appealing. These "individuals out there" may be wary of launching into long-winded research efforts in the software industry, given that such research often takes longer than the life-cycle of the products involved.

    > Welcome to theserverside.com! With the exception of a handful of contributors, the majority of comments on all topics amount to unqualified, subjective and emotional drivel. If there was a rating scheme like slashdot, most of these comments would be moderated down to -1 and we wouldn't have to read them. As it is, the best we can do is ignore them.

    Indeed, all this free speech and challenging of others' conclusions can only be counter-productive, right Comrade?

    These CSIRO chaps are scientists, they should revel in the opportunity to defend their research on its merits ;-)

  35. > These CSIRO chaps are scientists, they should revel in the opportunity to defend their research on its merits ;-)

    we do, mate, we do....and I'm a technologist :-}
  36. Posted by Peter Daily 2001-09-09 00:04:10.0.
    >Hmm, I do detect an ounce of sarcasm there. Well, I would contend that the prospect of investing 3 person-years' worth of effort into a report that is 6 months out-of-date by the time it gets released isn't universally appealing.

    As I'm sure you'd acknowledge, the first time is the hardest, and from now on it will be much easier to keep up with new releases and versions. Look out for the inclusion of JBoss 2.4 and INTERSTAGE 4 in the next few weeks. A major upgrade covering WLS 6.x, BAS 5.0, SilverStream 4 and WebSphere 4 (still not fully released on NT I believe - i.e. single server version only when I checked a couple of weeks ago), and possibly new inclusions from other major vendors, is planned for January.

    So while we may be slightly behind with a couple of products right now, this'll be rectified real soon now. Something I'm sure you'll commend...:-}
  37. Up-to-date results

    Peter,

    One of the more interesting aspects of this work is working with some of the products for relatively long periods of time. We've been able to track performance differences across a number of versions of products for over 18 months now.

    We have observed that for most products the base performance doesn't improve in great leaps and bounds from one version to the next. In some cases the performance has actually dropped! Sure, there are incremental improvements, and in some cases vendors have been able to either fix bugs or find performance enhancements as a direct result of our testing and close interaction with them, resulting in bigger jumps in performance - in these cases we've had access to the newest production versions of their products well in advance of the general public.

    Functionality also tends to increase incrementally, with just a few extra features per release. Our methodology allows us to track these changes relatively easily.

    Paul.
  38. You are saying you worked closely with 5 vendors. Did you really? What kind of support did you get from, for example, BEA or IBM? Who was setting up and tuning their servers?

    In any case, great job!
  39. Posted by George Northon 2001-09-07 12:47:48.0.
    >You are saying you worked closely with 5 vendors. Did you really?

    Sure did... why do you doubt me?
  40. Greetings,

    I am glad that somebody or some organization has been able to do such a test. I wish there were more. I would say the results of the evaluation are close to the truth. I have used WebLogic 4.5.1/5.1/6.0, Borland App Server 4.0, JRun 2.3.3/3.0/3.1, Orion 1.5.2, tried SilverStream 3.7, tried ATG Dynamo 5, and tried iPlanet. Actually, I did an evaluation for my company, so I had a chance to look at some of these application servers.

    I agree with the results of the evaluation. I think one thing is missing here - an evaluation of the Web containers. From my point of view, Borland is best in most categories (I mean not only in Java and EJB); but, if I am not mistaken, BAS 4.x has a weak Web container, which slows down the performance of the whole app server quite a bit. By the way, Borland provides a solution to integrate BAS with other servers like JServ or JRun to solve this problem, but that means two different JVMs, etc., so there are still some weaknesses in the performance.

    I would also disagree with the results in the Development and Deployment column. The pair of JBuilder EE 4.0 and BAS 4.5 performs much better than Visual Cafe EE 4.0 and WebLogic 5.1, or VisualAge for Java 3.5 and WebSphere. To run Visual Cafe and WLS on the same workstation you have to have 512 MB of RAM plus a good CPU; VisualAge by itself will eat a lot of resources (maybe I am wrong about VisualAge, I last used it about a year and a half ago). And what about EJB hot deployment? WLS does not support it at all.

    I would also like to see another column in the result table, "Bug Free". I think WebLogic would have the worst score (10 service packs for a single release!).

    Best regards,

    Taras
  41. > > And what about EJB hot deployment? WLS does not support it at all.
    Not true. It does support hot deployment - in that you deploy via the console or drop a jar in the directory and voila it is deployed.
    Unfortunately, it UNdeploys before it deploys, so I am not sure how valuable this is in a production environment.

    To be honest, I am not sure how valuable/secure the whole concept of hot deployment is in a production environment. It is definitely required for development though...

    > > Also I would like to see another column in the result table, "Bug Free". I think WebLogic would have the worst score (10 service packs for a single release!).
    I am not sure the number of service packs is a reasonable metric for judging how buggy it is. WLS 5.1 has been out for quite a while now (in this app server market, that's quite a while), and the SPs are released pretty frequently, and not all the service pack contents are bug fixes either...

    Personally I would prefer lots of frequent (but small ;) service packs rather than put up with a bug for ages... or worse - the "you need to upgrade" line. I think it shows a pretty good level of support - especially considering that the current version is two major releases ahead.
  42. I would like to see the report. All I could find was the executive summary. Is the report available?
  43. Hi,

    I'd like to thank you for your work. Such work has been sorely lacking for the past 2-3 years.
    I have been desperate to see something like this.
    I hope this is not the last time we see such app server/web server benchmarking.
    I hope also that it will stay market-independent as long as possible.... (Am I naive???)

    I would have been pleased to see similar work about ERP systems as well (EIS systems in general).
    But that is a big job!!!

    I would also have been glad to see something about the app server Orion 1.5.2 from IronFlare...
    Maybe you have planned to do so in the near future?!
    Anyway, if anyone has good experience with Orion, WLS and BAS, it would be very nice to get benchmark info about the features of these 3 app servers.

    So thanks again and keep on testing.

  44. Posted by Robert Nicholson 2001-09-07 04:15:32.0.

    >I would like to see the report. All I could find was the executive summary. Is the report available?

    Afraid the report is only available commercially - email me for details.

    ian.gorton@cmis.csiro.au
  45. I don't understand the scores. How can SilverStream score 5 on J2EE Support (highest) and 3.5 on EJB Support (second lowest)?
  46. Nabil,

    Yes, good question - the best answer is to read the complete report (from www.cmis.csiro.au/adsat).

    The quick answer is that the rankings are based on the evaluation of a complex set of lower level features; some are directly related to the J2EE/EJB specification, others are not mandated by the specs but up to the vendors.

    Paul.
  47. I am surprised that iPlanet doesn't even figure in the comparison list. Aren't there any iPlanet supporters and implementors at all? I have used WebLogic and iPlanet. Though iPlanet cannot match WebLogic in its present form, it definitely deserves a mention among the "top 6".

    --Chidu.
  48. Chidananda,

    We actually examined more products than appeared in the final report. For various reasons we didn't pursue them all in their current form. Some just didn't work, others fell over under load, etc. The version of iPlanet available about 6 months ago had relatively poor EJB support, and worse tool support for EJB deployment. The release notes that came with it documented numerous manual tasks required to develop and deploy EJBs - far more than other comparable products that we were familiar with.

    However, we did try and get our application going on it (with Sun's help), but without much success. We decided to wait and see what the next version would be like.

    It's possible we'll include iPlanet 6 in the next round of evaluations.

    Regards,

    Paul.