Sun responds to Microsoft on WSTest

  1. Sun responds to Microsoft on WSTest (39 messages)

    On June 23, 2004, Sun published a whitepaper showcasing superior Java™ 2 Enterprise Edition (J2EE™) web services performance when compared to Microsoft's .NET. See http://java.sun.com/performance/reference/whitepapers/WS_Test-1_0.pdf for the original whitepaper.

    Microsoft published a response on July 14th at http://www.theserverside.net/articles/showarticle.tss?id=SunBenchmarkResponse refuting our claims and stating that in their tests, .NET web services performance was higher than that of Java web services.

    Issues with Microsoft's Response

    In their response, Microsoft makes several statements that we shall examine here:

    Database interactions

    Microsoft claims that Sun's tests are overly simplistic and do not provide customers with useful information because they do not include any database interactions.

    In our paper, we made clear that the purpose of the test was to measure the performance of the web services infrastructure. We quote: “To avoid the effects of other platform components, the web service methods perform no business logic but simply return the parameters that were passed in.”
    WSTest is a micro-benchmark whose specific purpose is to measure the basic performance of the underlying web services infrastructure.
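    To make the shape of these operations concrete, a JAX-RPC-style echo endpoint looks roughly like the sketch below (method names other than echoVoid are hypothetical; the actual WSTest sources are linked later in this thread):

    // Minimal sketch of an echo-style JAX-RPC 1.1 endpoint: no business logic,
    // the methods simply return whatever was passed in.
    interface WSTestIF extends java.rmi.Remote {
        void echoVoid() throws java.rmi.RemoteException;
        String echoString(String s) throws java.rmi.RemoteException;
    }

    class WSTestImpl implements WSTestIF {
        public void echoVoid() { }            // nothing to do
        public String echoString(String s) {  // echo the input back
            return s;
        }
    }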

    Lightweight application server

    Microsoft claims that by using Tomcat instead of a commercial J2EE application server, the results are somehow irrelevant.
    Our purpose in using Tomcat (which is bundled with JWSDP) was to make it easier for a customer to run the benchmark. Considering that this benchmark makes very light use of web server functionality, we believe that the web server used is immaterial. Note that Tomcat is used only as an HTTP server; the actual web services functionality is implemented within JWSDP.

    Doculabs benchmark

    Microsoft makes reference to the Doculabs web services benchmark report that was published more than a year ago. This report is irrelevant for the purposes of this discussion because it did not test any J2EE-compliant web services implementations; at the time of its publication, there were no such products. (The products tested were not JAX-RPC 1.1 compliant.)

    Tuning

    We discovered that we had not increased the number of network connections on the .NET client side. We added the appropriate setting to machine.config, re-ran the tests using J2SE 5.0 Beta 2, and found that the measured throughput and response times are still superior to the .NET results.
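    (The posting does not show the exact change; in .NET 1.x the client-side connection limit is raised via the connectionManagement section of machine.config, roughly as sketched below, where the maxconnection value is purely illustrative.)

    <configuration>
      <system.net>
        <connectionManagement>
          <!-- raise the default limit of 2 concurrent HTTP connections per host -->
          <add address="*" maxconnection="48" />
        </connectionManagement>
      </system.net>
    </configuration>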

    Conclusion

    The only way to see eye to eye on a benchmark is if it is developed by consensus in an industry standards organization that is experienced in developing benchmarks. We would like to once again invite Microsoft to participate in such an effort at SPEC. See http://www.spec.org/benchmarks.html#esp and http://www.eweek.com/article2/0,1759,1522049,00.asp.

    Dennis MacNeil

    Senior Product Marketing Manager
    Java 2 Platform, Enterprise Edition
    Sun Microsystems

    Threaded Messages (39)

  2. entertaining SOAP opera

    Where else can you get such entertaining, nonsensical melodrama? Gotta love the flame war after someone posts benchmarks claiming superior performance.
  3. Java always better than .Net

    From our experience, Java is a more mature, scalable platform than .NET. For a mom-and-pop shop, .NET/VB was a good option, but now that they are moving to Linux servers and running Tomcat, Java favours them most.
  4. Java always better than .Net

    From our experience, Java is a more mature, scalable platform than .NET. For a mom-and-pop shop, .NET/VB was a good option, but now that they are moving to Linux servers and running Tomcat, Java favours them most.
    I resent that. Cobol is far more Mature than JAVA.
  5. Java always better than .Net

    I resent that. Cobol is far more Mature than JAVA.
    Let's do a Cobol/Java web services benchmark then :-)
  6. Re: Java always better than .Net

    I resent that. Cobol is far more Mature than JAVA.
    Let's do a Cobol/Java web services benchmark then :-)
    Great! Why not make a Cobol.NET version of the benchmark? Possible, yes. Good idea? Er, does anyone still know Cobol? ;)
  7. Re: Java always better than .Net

    Good idea? Er, does anyone still know Cobol? ;)
    Yes, I do. But ..
    a. Don't tell anyone
    b. I don't know COBOL.Net. :)
  8. Java always better than .Net

    Let's do a Cobol/Java web services benchmark then :-)
    That's possible. Micro Focus (www.microfocus.com) enables you to expose your COBOL programs as web services. But..... as they have been very close with Microsoft lately, I doubt they would want to cooperate.
  9. SAN FRANCISCO--Novell began selling version 9 of its flagship Linux product on Tuesday, adding Java server capabilities the same day that rival Red Hat made a similar move.

    Novell's SuSE Linux Enterprise Server 9 includes JBoss' application server, a software package that enables a computer to run Java programs, said Chris Stone, Novell's vice chairman, during a news conference here at the LinuxWorld Conference and Expo.

    On the same day, Red Hat debuted an application server of its own, based on the JOnAS application server from the ObjectWeb Consortium.

    The moves highlight efforts by Linux software companies to expand their product lines beyond just an operating system. It also means Novell and Red Hat will compete more with established application server sellers such as BEA Systems, IBM, Oracle and Sun Microsystems.

    Novell will provide first-, second- and third-level support for the JBoss application server, Stone said--in other words, everything from standard technical support to necessary engineering changes. And in coming months, Novell will gradually move to JBoss as the core engine of its Extend software for setting up Internet portals and tying together customers' disparate applications.

    SLES 9 is the first high-end Linux product to come from Novell since it bought SuSE Linux in January for $210 million. The software uses the new 2.6 version of the Linux kernel; its prime competitor, Red Hat Enterprise Linux, still uses 2.4 with some modifications from 2.6.

    http://news.com.com/Novell+launches+new+Linux--with+JBoss+Java/2100-7344_3-5295548.html?tag=nefd.top
  10. What happened to silverstream's app server? Novell spent $212 million to acquire them and now they're turning to JBoss?
  11. exteNd = deadeNd?

    Didn't SilverStream turn into Novell exteNd? Looks like Novell deadeNd now.
  12. What happened to silverstream's app server? Novell spent $212 million to acquire them and now they're turning to JBoss?
    Probably realized (too late) that it didn't play MP3s or print on HP photo paper :)
  13. Sigh

    "Our purpose in using Tomcat (which is bundled with JWSDP) was to make it easier for a customer to run the benchmark."

    Go ahead and make the test downloadable then.
  14. Sigh

    That's the main issue Microsoft had with the original benchmark. It didn't include any source or documentation to reproduce the results for independent verification. They were able to get results that matched the Java numbers but could not determine how Sun got the .NET numbers.

    Say what you will about Microsoft, but they provided everything possible for anybody to re-run the tests on their own machines - for both Java and .NET.


    Note: still no source or documentation from Sun.
  15. Our purpose in using Tomcat (which is bundled with JWSDP) was to make it easier for a customer to run the benchmark
    I would be interested to see the Sun benchmark source code. Does anyone have a link to it?
  16. try looking
    http://java.sun.com/developer/codesamples/webservices.html#Performance
  17. Since Sun provided the source code of the tests here:
    http://java.sun.com/developer/codesamples/webservices.html#Performance
    it would be good if TSS reran those tests (possibly from both sources, Sun and MS) and finally gave us a winner :))
  18. Yes, I would encourage TSS to conduct tests, and also to evaluate the stacks both implementations provide, for example WS-I Security and other specs. Carrying a call from place A to place B isn't gonna show anything.

    Also, let us know what kind of interceptor framework is supported by each implementation. A side-by-side comparison of stacks and services is very important to get a good picture.

    Mehul
  19. Stack List

    Just to name a few:

    WS-Eventing,
    WS-Addressing,
    WS-Security,
    WS-ReliableMessaging,
    WS-Policy, and
    WS-Discovery

    Also note that UDDI might be implemented on the Microsoft side.
  20. SOAP Wars

    I have an idea that'll settle the score. Have an "echo war" between servers.

    Each server gets to fire off volleys of SOAP messages to the other and ramps up the number of messages/second. The first server to fail to echo a response loses ;)

    This just gets better and better.

    Oz
  21. SOAP Wars

    I have an idea that'll settle the score. Have an "echo war" between servers. Each server gets to fire off volleys of SOAP messages to the other and ramps up the number of messages/second. The first server to fail to echo a response loses ;) This just gets better and better. Oz
    I'd gladly buy seats to see what happens. Now, if only there were a monitor for IIS so we could see the load periodically. I know you can use the system performance monitor, but many of the measurements do not return real values. It's probably user error, since none of the ASP or .NET counters were able to return the number of active threads handling requests. Or maybe those counters really do only return 0 or 1 and never return the real count of active processing threads.
  22. Dennis,

    Good to see you finally posted the code for your benchmark (albeit two months later), and that you understand the mis-applied settings for maxnetconnections in the .NET client driver program. However, you did not post what your new setting was, and you did not post your new results that you mention for your corrected version of the benchmark that you claim still offers better performance than .NET. Why not? I would be interested to see these new results and how much better the perf was for .NET in your corrected implementation vs. your original results. Some other comments:

    -We stand by our results with our posted code, and since you don't dispute our findings with our implementation of your benchmark suite, we assume you agree with them. Is this true? These results clearly show .NET outperforming the shipping JWSDP 1.4 in most tests, with the performance difference widening in .NET's favor as the message payload is increased. The differences we find could be explained by the below points.

    -Interesting to see you tested with a newer beta version of JWSDP (1.5 beta 2), whereas in your original tests you used shipping code. We tested with all shipping product (.NET 1.1). Why did you decide to use beta product for the re-test?

    -You used two different driver implementations in your tests (a .NET client for the .NET implementation, and a different Java client for the Java implementation). I would argue that if you want to compare the perf of strictly the backend web services as your paper claimed to do, you need to use the same benchmark driver program to test both implementations. This would be the correct testing methodology to keep everything outside of the system under test (or "SUT" in benchmark geek-speak) the same. Did you test the .NET web services with the Java driver and the Java web services with the .NET driver? Would like to see what you get when you do. Would also verify basic .NET/J2EE interop over the standard SOAP protocols as we did with our tests.

    -Your newly published code includes a Web.config file for the .NET Web Service app. Assuming the web.config file you published (and hence you are encouraging customers to use to verify the results) is the one you used in your tests, there are a couple of issues. First, you have the compiler set to "debug"; second, you have not properly adjusted http authentication to match JWSDP/Tomcat. To do so, you should set the authentication mode to "None" for .NET authentication, since you are doing none on the Java side. Testing .NET with a higher security setting than J2EE would be an improper test.
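    (For reference, the two adjustments described above would look roughly like the following in a .NET 1.1 web.config; this is a sketch, not the file Sun published.)

    <configuration>
      <system.web>
        <!-- build release (JIT-optimized) code instead of debug code -->
        <compilation debug="false" />
        <!-- no ASP.NET authentication, matching the unauthenticated Tomcat setup -->
        <authentication mode="None" />
      </system.web>
    </configuration>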

    -IIS also has an http authentication setting for the web service virtual directory, which defaults to Windows integrated authentication (kicks in when the client is also Windows). Since JWSDP/Tomcat does not have any such authentication setting between client and web server, you should turn off IIS authentication for the web service virtual directory (done simply via the IIS admin console by right-clicking the virtual directory, bringing up the properties sheet, and unchecking 'windows integrated authentication'). This might also impact your .NET results.

    -You make no mention of testing larger message sizes, as we did, which showed .NET outperforming JWSDP 1.4 by wider and wider margins as the message payload was increased. Did you test larger message sizes and what did you find?

    -As for standards-based benchmarks, these are fine if they add customer value. The article you pointed to in eWEEK clearly explains why we decided not to participate once pricing and price-performance were removed as a mandatory disclosure with results, and I also encourage customers to read it:

    http://www.eweek.com/article2/0,1759,1522049,00.asp

    -We have not yet had the chance to go over your .NET code, but will do so in the hopes of pointing out any potential perf differences between our implementation and yours, since these might be useful for the general public if they exist.


    Greg Leake
    Microsoft Corporation
  23. You make no mention of testing larger message sizes, as we did, which showed .NET outperforming JWSDP 1.4 by wider and wider margins as the message payload was increased.
    Most J2EE folk won't appreciate the relevance of your point above. They learnt RPC with RMI, typically used on a LAN without much cost for fine-grained chatter. SOAP is more likely to be used on a WAN, where latency dominates throughput. Sun's web services blueprint notes that "fine-grained service operations ... result in greater network overhead and reduced performance." So a realistic benchmark would use bigger messages.
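    (A rough, purely illustrative calculation: at, say, 50 ms of WAN round-trip latency, ten fine-grained calls spend 10 x 50 ms = 500 ms waiting on the network alone, regardless of how fast either SOAP stack serializes, whereas one coarse-grained call with a larger payload pays that latency only once.)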
  24. Interesting to see you tested with a newer beta version of JWSDP (1.5 beta 2), whereas in your original tests you used shipping code. We tested with all shipping product (.NET 1.1). Why did you decide to use beta product for the re-test?
    Priceless. I've heard the stories about how Microsoft was doing builds of .NET 1.1 (not even beta builds) to try to eke out an edge against the J2EE app servers while the TMC performance comparison was going on, and now you're claiming foul!

    Thanks for the chuckle, though.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  25. a reminder

    I had thought that my mission was over here in TSS, but I understand that I have to come back every other month or so to remind you that,

    in this thread we are discussing the latest test from Sun with Tomcat.
    T-o-m-c-a-t.

    Nobody wants to hear anything benchmarkish from the "Elephant Java EJB Servers" camp, which is so far behind that nobody is interested. There is no need to eke out anything against something that in practice is 5-6 times as slow, not to speak of lousy uptime.

    So, as it is August the 5th today, the next reminder will come around October the 5th! :)

    Best regards
    Rolf Tollerud
  26. The Problem with .NET Generics

    http://www.osnews.com/story.php?news_id=7930
  27. back to the future

    I had thought that my mission was over here in TSS, but I understand that I have to come back every other month or so to remind you that, in this thread we are discussing the latest test from Sun with Tomcat. T-o-m-c-a-t.
    Should I remind you that this thread didn't exist before August 3rd? A-u-g-u-s-t! ;)

    Better yet, remind us when your Don Quixotean mission is done, please. P-l-e-a-s-e.
  28. not sure where you heard that

    Cameron,

    Not sure where you heard that, but it's simply not the case. As the Middleware report indicates, the final shipping version of .NET 1.1 and Windows Server 2003 was used for the Middleware J2EE/.NET app server shootout released one year ago, and believe me, no special builds were done for the test you refer to, completed over one year ago (the product team has far more important things to keep them busy). As always, you can download the code and test for yourself on the RTM versions of these products. So whoever spread that rumor is simply wrong with a capital W. As for Sun using beta product for their re-test, I did think it worth highlighting, since their original results were done with a shipping version. Nothing wrong with using beta, per se (assuming customers can download and use it to verify results), and they did point this fact out, but I was curious what they found when running with their latest released version (JWSDP 1.4).

    -Greg
  29. Where's the beef?

    Posted on TheServerSide.NET

    Here is a copy of the primary methods contained in the C# Web Service created for the Sun Java benchmarks.
    [System.Web.Services.WebMethodAttribute()]
    [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://www.sun.com/wstest/wsdl/echoVoid", RequestNamespace="http://www.sun.com/wstest/wsdl/", ResponseNamespace="http://www.sun.com/wstest/wsdl/", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
    public override void echoVoid()
    {
    }
            
    /// <remarks/>
    [System.Web.Services.WebMethodAttribute()]
    [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://www.sun.com/wstest/wsdl/echoStruct", RequestNamespace="http://www.sun.com/wstest/wsdl/", ResponseNamespace="http://www.sun.com/wstest/wsdl/", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
    public override void echoStruct([System.Xml.Serialization.XmlElementAttribute("struct")] ref Struct[] @struct)
    {
    }
            
    /// <remarks/>
    [System.Web.Services.WebMethodAttribute()]
    [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://www.sun.com/wstest/wsdl/echoList", RequestNamespace="http://www.sun.com/wstest/wsdl/", ResponseNamespace="http://www.sun.com/wstest/wsdl/", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
    public override List echoList(List s)
    {
    return s;
    }
            
    /// <remarks/>
    [System.Web.Services.WebMethodAttribute()]
    [System.Web.Services.Protocols.SoapDocumentMethodAttribute("http://www.sun.com/echoSynthetic", RequestNamespace="http://www.sun.com/wstest/wsdl/", ResponseNamespace="http://www.sun.com/wstest/wsdl/", Use=System.Web.Services.Description.SoapBindingUse.Literal, ParameterStyle=System.Web.Services.Protocols.SoapParameterStyle.Wrapped)]
    public override void echoSynthetic(string s, Struct str, [System.Xml.Serialization.XmlElementAttribute(DataType="base64Binary")] System.Byte[] bytes)
    {
    }
    Where exactly is the code? No web service would be created like this. So it appears to me that Sun didn't test web services but instead the ASP.NET pipeline. But did they optimize that?

    When a request comes into IIS with a .ASMX extension, IIS hands off the request to ASP.NET via the ASP.NET worker process. The ASP.NET worker process starts the application by running through the list of HttpModules stored in machine.config which supply services to the HttpHandler (ASMX is one of these). These services are available even if they are not used and therefore their creation and processing of each incoming request has a performance cost.

    This is the list of HttpModules that get loaded to handle web requests in a standard .NET implementation (from machine.config on my system).
    <add name="OutputCache" type="System.Web.Caching.OutputCacheModule" />
          <add name="Session" type="System.Web.SessionState.SessionStateModule" />
          <add name="WindowsAuthentication" type="System.Web.Security.WindowsAuthenticationModule" />
          <add name="FormsAuthentication" type="System.Web.Security.FormsAuthenticationModule" />
          <add name="PassportAuthentication" type="System.Web.Security.PassportAuthenticationModule" />
          <add name="UrlAuthorization" type="System.Web.Security.UrlAuthorizationModule" />
          <add name="FileAuthorization" type="System.Web.Security.FileAuthorizationModule" />
          <add name="ErrorHandlerModule" type="System.Web.Mobile.ErrorHandlerModule, System.Web.Mobile, Version=1.0.5000.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    Let's tick these off and see how, if at all, they would be necessary to service these anorexic web services.

    1. OutputCache - This is an awesome feature of .NET that allows even web service responses to be cached based on parameters, just like a standard ASP.NET web page request. While including the CacheDuration property in the WebMethod attribute would have greatly improved the .NET performance, they didn't use that in their tests. So, for true optimization, this should have been removed.

    2. Session - This is the module for using SessionState. In the Sun specs they mention that they turned Session state off for the application. But they didn't tell the module itself not to load (unless the local config setting does this) and therefore they have added another layer to the ASP.NET chain.

    3. WindowsAuthentication - This is the module that performs authentication to an ActiveDirectory or Local machine user database. Since authentication was not used, there is no need to keep this module.

    4. FormsAuthentication - This is the forms mode of authentication which is also not being used.

    5. PassportAuthentication - Also not being used.

    6. UrlAuthorization - This is the module that is used to provide role based authorization to a URL resource. Authorization is also not being tested so this could be removed.

    7. FileAuthorization - Role based authorization to file resources, also not needed.

    8. ErrorHandler - This module is used to catch unhandled application exceptions by presenting that lovely yellowish error screen. Since this is a performance only test, this could also be removed.

    For more information on removing unneeded HttpModules check out Chapter 6 of "Improving Performance and Scalability in .NET Applications" under the "Short Circuit the Http Pipeline" heading.
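    (As an illustration of that short-circuiting technique, an application-level web.config can unload the modules these no-logic web methods never touch; this is a sketch only, assuming the standard .NET 1.1 module names listed above.)

    <configuration>
      <system.web>
        <httpModules>
          <!-- drop services the echo-style web methods never use -->
          <remove name="Session" />
          <remove name="WindowsAuthentication" />
          <remove name="FormsAuthentication" />
          <remove name="PassportAuthentication" />
          <remove name="UrlAuthorization" />
          <remove name="FileAuthorization" />
        </httpModules>
      </system.web>
    </configuration>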

    The sum of all of this is that after making NO real optimizations of the ASP.NET pipeline, Sun claims that Java performs better than .NET at running web services containing no actual logic. Why am I not impressed?
  30. Blog war

    I am waiting for Dennis to say something here...... Dennis!! Don't chicken out..
    Don't let those Java supporters down..
  31. we have all info we need

    "Without the price/performance benchmark, as a CTO who truly cares about cost before performance, it is hard to understand what legitimate reason vendors could have for siding against the inclusion of price/performance information, other than trying to cloak the true cost of high-end systems."
    IBM officials would not comment.
    So what more is there to say? Is there anyone who thinks they want to hide price/performance metrics that are good? :)

    Regards
    Rolf Tollerud
  32. Benchmarks don't measure 5 Nine's Reliability.
    ECC memory on the server is just the first step on the Big Iron systems.
    Also, soft error protection on the paths to and from the CPU are also a part of the design.

    A Performance/Cost metric doesn't measure this kind of server protection.

    So, a real metric would be ((Performance * Reliability Factor) / Cost).

    Can't find a good link on this,
    but Scientific American had a good article on it, somewhere.

    http://www.geek.com/news/geeknews/2003May/bch20030515020006.htm
    http://www.irps.org/03-41st/ws/c_slayman.pdf

    I'd say the medical and financial communities
    are two examples of where top-quality servers are more important than ultra performance at the lowest cost.

    Of course, my personal web site runs on an Intel box.
  34. 5 Nine's Reliability?

    “top quality servers?”

    Are you saying that for instance Weblogic has a better uptime statistic than Spring/Tomcat?

    "my personal web site runs on an Intel box"

    Does your personal web site run with clustering and failover systems too?

    Why don't we bury these expensive Sun boxes with expensive EJB servers once and for all? There is absolutely no use for them whatsoever.

    Regards
    Rolf Tollerud
  35. great fun

    I don't know about others, but I find this web services shootout funny. Let's get real here. If performance is critical, the first thing I would do is dump XML/SOAP and use something better. It doesn't matter which stack you use, since parsing XML hogs memory like an SR-71 drinks jet fuel. Well, that's not totally true. If you use XStream, JiBX or another object-centric XML parser, the performance is acceptable. It still isn't cheap and eats plenty of memory and CPU. I stress-tested XStream a couple of months back, and it was much nicer on resource consumption. Compared to plain old Java serialization, even XPP+XStream uses more resources. Compared to Crimson, Xalan and Xerces, XPP+XStream uses 3x less CPU and memory. Of course, the mileage depends on the XML and the size of the message.
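    (For anyone curious what the XPP+XStream combination mentioned above looks like in code, here is a minimal sketch; the Order class and its field values are hypothetical.)

    import com.thoughtworks.xstream.XStream;
    import com.thoughtworks.xstream.io.xml.XppDriver;

    public class XStreamDemo {
        static class Order {            // hypothetical payload type
            String id = "42";
            double amount = 19.95;
        }

        public static void main(String[] args) {
            // use the XPP pull parser instead of a DOM parser such as Xerces
            XStream xstream = new XStream(new XppDriver());
            String xml = xstream.toXML(new Order());    // object -> XML
            Order back = (Order) xstream.fromXML(xml);  // XML -> object
            System.out.println(xml + " / " + back.amount);
        }
    }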
  36. great fun

    If performance is critical, the first thing I would do is dump XML/SOAP and use something better.
    Precisely!
  37. great fun

    If performance is critical, the first thing I would do is dump XML/SOAP and use something better.
    Precisely!
    Me three!

    If one is using web services, they are probably using them to communicate with another platform (i.e. Java -> .NET). If not, they more than likely should be using something else.

    Kind of like watching a turtle race.
  38. Not so fast....

    Both Java and .NET have technologies for distributed systems that are faster than Web Services. But, those are technology specific implementations and so we lose interoperability and then we're right back where we started.

    So, in an interoperable world performance is still a major concern. If two companies provide basically the same service available through web services, speed will be a competitive advantage.

    You may not worry about performance now, but the more we use web services, the more important performance will become. The question is: will the network infrastructures be able to support all that crazy XML bandwidth?
  39. AXIS?

    How does this compare with AXIS?
  40. AXIS?

    How does this compare with AXIS?
    Axis is better than the older Apache SOAP driver. Axis still uses Xerces, so the performance is equivalent to Xerces. Dennis Sosnoski has plenty of benchmarks comparing XML parsers. The last time I checked, most of the numbers from his last set of XML benchmarks still provided a good measurement. XPP3 is one of the best parsers today.