MS publishes new .NET 2.0 benchmarks against Java Services

Home

News: MS publishes new .NET 2.0 benchmarks against Java Services

  1. TheServerSide.net has announced the publication of a new iteration of "Comparing Web Service Performance" and "Comparing XML Performance", benchmarks originally developed by Sun. The results show .NET serving raw requests much faster than either JWSDP or IBM's web services implementation.
    For the web services tests, .NET 2.0 Beta 2 showed significant improvements over both the Java-based versions and .NET 1.1 in every test. For example, in the EchoList test, in which a linked list is sent from a web service, .NET 2.0 reached an average throughput of 1022 transactions per second while JWSDP 1.4 and IBM WebSphere only reached around 400 transactions per second. The EchoStruct test showed similar results, with .NET 2.0 averaging 992 tps and JWSDP and IBM WebSphere averaging only 400 and 416 tps respectively.
    The paper goes through some fairly detailed test cases, including the hardware used to issue the requests and the specific software tested.

    Microsoft tested a .NET 2.0 beta, as opposed to a final product; they also tested only JAX-RPC and IBM's web service implementation, leaving out alternatives. The paper also specifically mentions that only certain parts of the message were tested (leaving out authentication, for example, as out of scope).

    Another aspect to the benchmark is that the web services test - more likely to affect "real world" programs than the XML test - is still affected by the XML test, which is itself reliant on DOM and SAX implementations in the JVM. If the XML implementation supplied with the JVM is inefficient, it establishes a baseline of poor performance that a web services implementation isn't likely to overcome. Therefore, it stands to reason that both tests are an indictment of JAXP more than JAX-RPC or the JVM as a whole.

    When asked to comment on the benchmark, Kirk Pepperdine, co-author of http://javaperformancetuning.com/, suggested that the one second delay between requests was a "step in the right direction," but adds that a constant delay can introduce further anomalies in the test data. Further, Mr. Pepperdine mentions the fairly simple JVM tuning ("-Xmx1024m -Xms1024m -server") as possibly being suboptimal, but no valid conclusion can be made without a more robust set of statistics and garbage collection tuning data.

    Dr. Heinz Kabutz, in "Who is pushing Java?," suggests that Java's rate of improvement may be slowing down, because of a lack of investment in performance research on the parts of IBM and Sun, with no replacement available in the foreseeable future outside of BEA.

    What do you make of the data? Are raw performance numbers like this actually relevant, especially when clustered deployments are quite common in high-throughput service sites? Do numbers like this affect service implementations outside of the realm of benchmarks?

    Threaded Messages (212)

  2. JAXP needs to be faster[ Go to top ]

    It is obvious that the JAXP implementations (Xerces and Xalan) need to be faster. If Microsoft managed to nearly double XML parsing/generation performance in .NET 2.0, I am sure the same is possible with JAXP too.
  3. JAXP needs to be faster[ Go to top ]

    All tests were conducted on the same hardware, an HP DL858 2-processor AMD Opteron @1.8 GHz and 4 MB RAM with gigabit networking.

    I'm surprised the tests ran at all ...
  4. JAXP needs to be faster[ Go to top ]

    I am still waiting to see a major .NET application that Microsoft has released. VS.NET is not .NET, Office is not .NET... can someone tell me where these applications are? If .NET is so great, why is Microsoft not developing software with it? Why, why, why?
  5. JAXP needs to be faster[ Go to top ]

    "why is Microsoft not developing software with it, why, why, why?"

    Because compiled C/C++ (or even the new compiled D programming language) beats the hell out of both Java and .NET.
  6. JAXP needs to be faster[ Go to top ]

    "why is Microsoft not developing software with it, why, why, why?"
    "Because compiled C/C++ (or even the new compiled D programming language) beats the hell out of both Java and .NET."

    No. As I have already shown with the Linpack benchmarks (which are robust, not microbenchmarks), both .NET and Java can perform very well compared with compiled C/C++ - the link I provided showed both within 5-6% of C/C++ speed. That is statistically indistinguishable, and is very impressive for intensive mathematical operations.

    I'm sure there is some general term to describe continual assertion of an opinion even after repeated exposure to evidence to the contrary...
  7. MS stupid?[ Go to top ]

    Oh come on Steve, real life applications don't consist to a large extent of running small samples in a tight loop! You have at least admitted that C# is better suited than Java for desktop applications, right? But MS still clearly prefers C++. (Thank God.) What can the reason be, do you think? Stupidity?

    Regards
    Rolf Tollerud
    (who prefers the D language)
  8. MS stupid?[ Go to top ]

    Oh come on Steve, real life applications don't consist to a large extent of running small samples in a tight loop! You have at least admitted that C# is better suited than Java for desktop applications, right? But MS still clearly prefers C++. (Thank God.) What can the reason be, do you think? Stupidity?
    Regards,
    Rolf Tollerud
    (who prefers the D language)

    Oh come on Rolf, you should know better. At least try to do a little research before you post. If you did, you would know that Linpack is not small samples in a tight loop - it is an internationally accepted set of benchmarks for testing a wide variety of demanding mathematical operations and associated data structure manipulations. It is a good measure of some aspects of language performance.

    C# is just a language. It is no better than Java or Visual Basic or Pascal for desktop applications. What matters is the libraries. From what I have heard, .NET has good libraries for desktop applications. With Microsoft's help, Sun is evolving Java that way too. The JDIC API is a good example.

    I have no idea why Microsoft is still predominantly using C++. I suspect that it is probably because they have hundreds of millions of lines of code already written using it. One thing I do know is that it is not due to performance.
  9. MS stupid?[ Go to top ]

    I suspect that it is probably because they have hundreds of millions of lines of code already written using it. One thing I do know is that it is not due to performance.

    Microsoft has proven in the past their capability of entirely rewriting an application from scratch (cf. the SQL Server history).
    No, sorry to tell you this, buddy, but you take a performance hit when you go for virtual machines and such.
    The fact that you can customize the garbage collector in the next release of .NET is an example that a generic approach doesn't fit all cases.

    It's just that the average Joe doesn't need to improve his source code to make it go fast. So writing C++ or C# makes no difference; the thing is fast enough anyway.

    Ever try to code an MPEG/DivX encoder in C#? All military and high-performance, constrained environments will stay with C++/assembly language.

    I can just see NVidia coming out with their device driver in .NET by the end of this summer ... cooool ...

    a++ Cedric


    ps: I'm not too sure about BizTalk being entirely coded in C#; it probably depends on what you put in there ...
  10. MS stupid?[ Go to top ]

    I suspect that it is probably because they have hundreds of millions of lines of code already written using it. One thing I do know is that it is not due to performance.
    Microsoft has proven in the past their capability of entirely rewriting an application from scratch (cf. the SQL Server history).
    No, sorry to tell you this, buddy, but you take a performance hit when you go for virtual machines and such.

    I have already posted links to show that this can be negligible. I don't understand why this point is still being debated.
    The fact that you can customize the garbage collector in the next release of .NET is an example that a generic approach doesn't fit all cases.

    I don't understand this point. This customisation indicates that VM-based systems are versatile.
    Ever try to code an MPEG/DivX encoder in C#? All military and high-performance, constrained environments will stay with C++/assembly language.

    This is not the case. There are already examples of VM-based Java used in embedded and real-time systems. Also, the military tend to avoid C++ and Assembler - they use safe languages like Ada instead.
  11. MS stupid?[ Go to top ]

    This is not the case. There are already examples of VM-based Java used in embedded and real-time systems. Also, the military tend to avoid C++ and Assembler - they use safe languages like Ada instead.

    Well, I guess it depends on the region of the world you come from.
    In Europe, at the military agencies I worked for, we wrote C/C++ code and assembly for high performance.
    I also know that in the aviation arena (especially in planes, satellites and so on) they all write in C or assembly language. Some of the companies even have their own proprietary operating systems.
    Although I now write in Java more than anything else when I do some coding, I have changed industries and am now in a more "versatile" arena.

    As for the .NET CLR, I have seen it deadlock whilst garbage collecting server side every now and then under high load. Microsoft told us that it would be fixed in the 2.0 release, meanwhile, every now and then we just reboot the machine.

    I think that Java and the .NET platform are excellent for building applications quickly and efficiently. .NET excels at building GUI applications, and Java at server-side stuff.

    I would also like to remind the folks doing Java that in its day, Digital offered a nice tool to optimize the binary code of your application after collecting runtime information.
    After several runs it would generate a binary that would be optimal for the Alpha processor on which it ran.
    In those days it wasn't Just-In-Time; it was "I planned ahead in time and optimized my application".

    a++ Cedric
  12. MS stupid?[ Go to top ]

    I also know that in the aviation arena (especially in planes, satellites and so on) that they all write in C or Assembly language.

    Certainly not all - for example 99% of all software in the Boeing 777 is in Ada. The assembler use is limited and highly specialised, as it has to be fully tested (all paths through the code for all ranges of input have to be analysed).
  13. MS stupid?[ Go to top ]

    I also know that in the aviation arena (especially in planes, satellites and so on) that they all write in C or Assembly language.
    Certainly not all - for example 99% of all software in the Boeing 777 is in Ada. The assembler use is limited and highly specialised, as it has to be fully tested (all paths through the code for all ranges of input have to be analysed).

    I was referring to the European side of aviation (Airbus, for example).
    I know that Ada is popular in America, but I am not aware of any Ada projects in Europe.
  14. MS stupid?[ Go to top ]

    I also know that in the aviation arena (especially in planes, satellites and so on) that they all write in C or Assembly language.
    Certainly not all - for example 99% of all software in the Boeing 777 is in Ada. The assembler use is limited and highly specialised, as it has to be fully tested (all paths through the code for all ranges of input have to be analysed).
    I was referring to the European side of aviation (Airbus, for example). I know that Ada is popular in America, but I am not aware of any Ada projects in Europe.

    Airbus use Ada a lot, both in the manufacturing process software and in embedded systems (such as flight control).
  15. MS stupid?[ Go to top ]

    Airbus use Ada a lot, both in the manufacturing process software and in embedded systems (such as flight control).

    And if you want a military example, vast parts of the Eurofighter software are in Ada.
  16. JAXP needs to be faster[ Go to top ]

    I am still waiting to see a major .NET application that Microsoft has released. VS.NET is not .NET, office is not .NET...can someone tell me where these applications are? If .NET is so great, why is Microsoft not developing software with it, why, why, why?

    None of the software mentioned needs to be in .NET. BizTalk Server 2004 is written completely in C# because it's web-service oriented. If Microsoft wrote VS.NET and Office in .NET, they would be a lot slower, possibly as slow as OpenOffice is.
  17. JAXP needs to be faster[ Go to top ]

    None of the software mentioned needs to be in .NET. BizTalk Server 2004 is written completely in C# because it's web-service oriented. If Microsoft wrote VS.NET and Office in .NET, they would be a lot slower, possibly as slow as OpenOffice is.

    I don't get it. If all .NET code is compiled to the native language of the platform, why is it slower? Is it because of garbage collection?

    C# is fast enough for BizTalk but not for desktop apps? That sounds a little strange, since I would presume that a BizTalk server does more work than any word processor, period. I thought there was great promise for C# apps on the desktop, and that they would beat Java hands down... yet we do not see these applications.

    The bottom line is: if 'applications do not need to be in .NET', who needs .NET at all?

    Something ain't right.
  18. Something ain't right[ Go to top ]

    "I don't get it. If all .NET code is compiled to the native language of the platform, why is it slower???"

    Because of the time it takes for the code to be JITed, it will always be slow. Also, both Java and .NET load a lot of unnecessary library code.

    "The botton line is, if 'applications do not need to be in .NET', who needs .NET at all?"

    IMO both Java and .NET were a big mistake. What was needed was a simpler C++ with garbage collection and a linker. That exists now in the form of the Digital Mars D language. Just recompile for another platform. That is how I wanted it at least; I can only speak for myself.

    Regards
    Rolf Tollerud
  19. Something ain't right[ Go to top ]

    "I don't get it. If all .NET code is compiled to the native language of the platform, why is it slower???"
    Because of the time it takes for the code to be JITed it will always be slow.

    Wrong. In modern VMs (and .NET) the JIT-ing is very fast and has little impact on performance. Then optimisation can be done as a background thread, again with little impact.
    Also both Java and .NET loads a lot of unnecessary library code.

    Wrong. If you put in a trace you will see that only the required library code is loaded on demand. No unnecessary code is loaded.
    What was needed was a simpler C++ with garbage collection and a linker. That exists now in the form of the Digital Mars D language. Just recompile for another platform. That is how I wanted it at least; I can only speak for myself.
    Regards,
    Rolf Tollerud

    Fair enough, but the work involved in maintaining code and ensuring it runs and is optimised on all the platforms that your potential customers may require is considerable. This is especially the case with D, where operating-system specific code is required (e.g. '#include <windows.h>'). Some of us have better things to do.
  20. Something ain't right[ Go to top ]

    Also both Java and .NET loads a lot of unnecessary library code.
    Wrong. If you put in a trace you will see that only the required library code is loaded on demand. No unnecessary code is loaded.
    Steve
    I mentioned this in another thread on TSS. You are wasting your valuable time refuting Rolf's ramblings. If he doesn't know that an assembly for a Type is never loaded unless the place where it is referenced needs to be executed, you know what kind of flag-bearers we have in the .NET world.
  21. Something ain't right[ Go to top ]

    You are wasting your valuable time refuting Rolf's ramblings.

    I apologise. It is self-indulgent.
    If he doesn't know that an assembly for a Type is never loaded unless the place where it is referenced needs to be executed, you know what kind of flag-bearers we have in the .NET world.

    They may be ramblings but they are also widely held beliefs.
  22. pretend I am 7 year old[ Go to top ]

    Well Steve, in the .NET world, if I need a method in a library DLL, the whole DLL is loaded. So how is it in the Java world? If I need one (1) method in a 500 KB jar file, how do I get to it? ;)

    Can we not agree to stop these silly personal attacks?
  23. pretend I am 7 year old[ Go to top ]

    Well Steve, in the .NET world, if I need a method in a library DLL, the whole DLL is loaded. So how is it in the Java world? If I need one (1) method in a 500 KB jar file, how do I get to it? ;) Can we not agree to stop these silly personal attacks?

    The Jar file is opened and the required class (and only the required class) is extracted. No other code is loaded, or JITed. To check this, try 'java -verbose:class'. Are you implying that if only one method in a .NET DLL is required, .NET loads and JITs all the code in the DLL? If so, I would be surprised, as this would be highly inefficient, and far inferior to Java. I don't have time to research this. I'd be interested in more information.
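    To see the lazy behaviour directly, here is a minimal, throwaway sketch (the class names are invented); running it with 'java -verbose:class' also reports each class only as it is first needed. Note the distinction: a class file may be *loaded* eagerly, but its static initializer is guaranteed to run only on first active use:

```java
public class LazyLoadDemo {
    static boolean helperInitialised = false;

    static class Helper {
        // This static initializer runs only when Helper is first actively used.
        static { helperInitialised = true; }
        static int answer() { return 42; }
    }

    public static void main(String[] args) {
        // Merely having Helper on the classpath (or in the same jar)
        // does not initialise it...
        System.out.println("before: " + helperInitialised);
        // ...it is initialised on first active use.
        int x = Helper.answer();
        System.out.println("after: " + helperInitialised + ", answer=" + x);
    }
}
```

    The first line prints "before: false" and the second "after: true", which is exactly the on-demand behaviour described above.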
  24. pretend I am 7 year old[ Go to top ]

    Well Steve, in the .NET world, if I need a method in a library DLL the whole DLL is loaded. So how is it in the Java world? If I need one(1) method in a 500 Kb jar file how to I get to it? ;) Can we not agree to stop these silly personal attacks?
    The Jar file is opened and the required class (and only the required class) is extracted. No other code is loaded, or JITed. To check this, try 'java -verbose:class'. Are you implying that if only one method in a .NET DLL is required, .NET loads and JITs all the code in the DLL?
    Of course not! There is a reason why they call it 'JIT'. If it JITted everything in that assembly (besides the method that was referenced by a Type in that assembly), you'd call it something else, wouldn't you? And that's called NGEN - a topic that doesn't get much coverage anyway. Steve, can we put an end to this here, please? Before long one of us may have to start explaining everything from square one.
  25. pretend I am 7 year old[ Go to top ]

    If it JITs everything in that assembly (besides the method that was referenced by a Type in that Assembly), you'd call it something else, wouldn't you?

    No, I wouldn't. If only a section of the entire application is compiled, and only at the point it is first needed at run time, 'JIT' is entirely appropriate.
    Steve, can we put an end to this here, please? Before long one of us may have to start explaining everything from square one.

    I apologise. I was only being curious. I don't use .NET, and I am honestly interested in how technologies that compete with Java work, especially in the context of threads like this one.
  26. you still load the file[ Go to top ]

    "The Jar file is opened and the required class (and only the required class) is extracted"

    To extract the class, the file has to be loaded into memory anyhow! So what are the savings? If the jar file has dependencies, they are probably loaded too, even if they are not needed for the particular method. :(

    In the .NET world, as I explained above, all the system libraries that come with .NET are precompiled. So if you need one (1) method, the whole DLL is loaded but not JITed; then it can be shared between processes.

    Seems like an improvement IMO, but it is nothing like the D language. In D you compile in just the same way as Java or .NET, at the command line; for instance,

    dmd.exe tabs.d dfl.lib (include as many libs as you want)

    In this case both compiling and linking are done in the same step. The required method is extracted from the library at compile time. Afterwards you have one small file, without dependencies of any kind, that can be installed just by copying it and starts so fast that you won't even have time to release the mouse button.

    Regards
    Rolf Tollerud
  27. you still load the file[ Go to top ]

    "The Jar file is opened and the required class (and only the required class) is extracted"

    To extract the class the file has to be loaded into memory anyhow! So what is the savings?

    This is not generally true; only portions of the file are loaded into memory. A JAR file is just a ZIP file with a different extension; one does not have to load the entire ZIP file into memory to extract one file out of it.

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
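    Cameron's point is easy to demonstrate with the standard java.util.zip API (a throwaway sketch; the entry names are made up): ZipFile reads the central directory stored at the end of the archive and then seeks straight to the one entry requested, without decompressing the others.

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class JarRandomAccess {
    public static void main(String[] args) throws IOException {
        // Build a small archive with two entries (stand-ins for class files).
        File jar = File.createTempFile("demo", ".jar");
        ZipOutputStream out = new ZipOutputStream(new FileOutputStream(jar));
        out.putNextEntry(new ZipEntry("A.class"));
        out.write("bytes of A".getBytes("UTF-8"));
        out.putNextEntry(new ZipEntry("B.class"));
        out.write("bytes of B".getBytes("UTF-8"));
        out.close();

        // ZipFile consults the central directory and seeks straight to the
        // single entry we ask for; entry A is never touched.
        ZipFile zf = new ZipFile(jar);
        ZipEntry entry = zf.getEntry("B.class");
        InputStream in = zf.getInputStream(entry);
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int b;
        while ((b = != -1) buf.write(b);
        System.out.println(new String(buf.toByteArray(), "UTF-8"));
        zf.close();
        jar.delete();
    }
}
```

    This prints "bytes of B" - the same random-access mechanism a class loader uses on a jar on the local filesystem.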
  28. you still load the file[ Go to top ]

    "The Jar file is opened and the required class (and only the required class) is extracted"
    To extract the class the file has to be loaded into memory anyhow! So what is the savings?
    This is not generally true; only portions of the file are loaded into memory. A JAR file is just a ZIP file with a different extension; one does not have to load the entire ZIP file into memory to extract one file out of it.
    Peace,
    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java

    Aren't jars mmap'ed on *nix?
  29. "Aren't jars mmap'ed on *nix?"

    Well, it certainly would be interesting to find out exactly how it is done, but..

    To find and extract hundreds or thousands of classes and check for dependencies must be an expensive process no matter how you do it. This does not only affect startup time: because not all classes may be needed at once in the lifetime of an application, at any time it may be necessary to load some classes (potentially from files swapped out). And then they must be JITed. All this far outweighs any advantage of being able to optimize for the current conditions.

    This whole process should be done at compile time, IMO. Why is run-time cross-platform capability so important? Why not just recompile? Nobody has ever been able to give me a satisfactory explanation.

    Regards
    Rolf Tollerud
  30. P.S.[ Go to top ]

    BTW, the class loader does not read the jar file directly, as Cameron claims.

    "The internal class loader is often referred to as the default class loader or the primordial class loader. Due to some details of the Class class, we often speak of classes that are loaded by the internal class loader as having no class loader at all.

    There is a significant change in the use of the primordial class loader between Java 1.1 and 1.2. In 1.1, the primordial class loader was used to load all classes on the CLASSPATH. In 1.2, the primordial class loader is used only to load the Java API class files; the virtual machine constructs an instance of the URLClassLoader class to load the classes from the CLASSPATH."
    URLClassLoader:

    Load classes from a set of URLs. A URL in this set may be a directory-based URL, in which case the class loader will attempt to locate individual class files under that directory. A URL in this set may also be a JAR file, in which case the JAR file will be loaded, and the class loader will attempt to find a class in the JAR file.

    Anatomy of a Class Loader
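    The "no class loader at all" wording from the quote above can be observed directly (a minimal sketch):

```java
public class LoaderDemo {
    public static void main(String[] args) {
        // Core API classes are loaded by the primordial (bootstrap) loader,
        // which getClassLoader() reports as null.
        System.out.println("String loader: " + String.class.getClassLoader());
        // Application classes come from the application class loader, which
        // in classic JREs is a URLClassLoader reading the CLASSPATH.
        System.out.println("App loader present: "
                + (LoaderDemo.class.getClassLoader() != null));
    }
}
```

    java.lang.String reports a null loader, while the demo class itself reports a real one - exactly the 1.2-style split between the primordial loader and the CLASSPATH loader described above.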
  31. P.S.[ Go to top ]

    There is a lot of information about this kind of optimization if you look for it:
    http://java.sun.com/j2se/1.5.0/docs/guide/vm/class-data-sharing.html

    You can try third-party products too:
    http://www.excelsior-usa.com/doc/jetl010.html. Java is supported by many vendors, and there are many products to solve specific performance problems, including your native compilation problem.
  32. P.S.[ Go to top ]

    BTW, the class loader does not read the jar file directly, as Cameron claim.

    Rolf, if you have some problem that results in you being a very unstructured thinker, then please let us know. I know people like this who are pretty intelligent, so it's worth my time to help them sort out their thoughts so that we can have a productive discourse. If they hadn't spoken up I'd have written them off as trolls, ignored them, and would have missed out on some very interesting ideas. Some of your postings are either very unstructured or you are trolling. Would you care to comment, or be ignored?

    Kirk
  33. "Rolf, if you have some problem that results in you being a very unstructured thinker"

    Kirk, nowhere in the world are there so many unstructured thinkers as on TSS.

    Also:

    .NET does not need to catch up; it is Java that needs to catch up. If you think that Java's 25% performance gain thanks to HotSpot is any advantage, think again. HotSpot is only one more ridiculous "patch" on the unreasonably over-architectured and over-complicated hopeless mess that is called J2EE, which has constantly lost every real-life study and benchmark like the one we are discussing. And uptime and stability! Don’t make me laugh. Shall I tell you of all the problems I had just trying to read and respond to this thread?
  34. please give it a merciful bullet[ Go to top ]

    Hotspot is only one more ridiculous "patch" in the unreasonable over-architectured and over-complicated hopeless mess that are called J2EE that has constantly lost all real-life studies and benchmark as the one we are discussing.

    A very strange comment, considering I have already shown you that HotSpot has nothing to do with J2EE. It is an intrinsic part of VMs everywhere, including on mobile devices.
  35. There is nothing to discuss; just read the JVM specification, it is public: http://java.sun.com/docs/books/vmspec/2nd-edition/html/ConstantPool.doc.html
    and it works as documented.
  36. All this far outweigh any advantage of being able to optimize for the current conditions

    You might think that it does, but the situations you describe are very rare. Most of the time the required code is kept in memory, and kept JITted, so it doesn't outweigh the advantage.

    Why is run-time cross-platform capability so important? Why not just recompile? Never has somebody been able to give me a satisfactory explanation.
    Regards,
    Rolf Tollerud

    Here it is.

    There is a wide (and increasing) range of platforms that Java runs on. It is both practically and commercially unwise (especially the latter) to provide everyone who needs your application with the source code. The alternative would be to maintain in-house pre-compiled versions of your product for every platform that the application has to run on. This is a maintenance and management nightmare, especially for the smaller developer (to see how hard it is, look at how long Debian Linux takes to release new versions - and they provide binaries for almost everything).

    The easier way is to provide the equivalent of 'p-code' - an intermediate binary - and let the JRE on the deployment machine handle the optimisation. This saves the developer and the customer a huge amount of time and effort. Unlike with languages such as D, they don't have to worry about the specific details of each platform. Even better - and this is the aspect of this that I find awesome - your application will run on current and future platforms that you may not even have heard of! All that is required is for someone to have provided a compatible JRE for that platform.

    Another point is that having run-time optimisation rather than pre-compilation also allows your code to be optimised on processors you don't know about! The potential of VMs to optimise for future complex multi-core processors is, I would suggest, going to give VM-based systems a significant advantage.

    Finally, there are security aspects. If you provide arbitrary binaries for a range of different processors, it is hard for the customer to be sure that a binary is safe. Having byte code class files which are loaded at run time allows both byte code validation and a security manager that can potentially check that this is not a rogue application (by intent or accident).

    That is the explanation.
  37. if you had a product[ Go to top ]

    Sorry Steve, but your explanation is unconvincing. It is enough for any product to be precompiled for Windows and Linux. The other 35 OSes you could support as a special service that the customer has to pay for. That is sound commercial practice.

    Consider what you get,

    1) Fantastic speed with no startup time
    2) One small file
    3) No dependencies

    I know that you can compile both Java and .NET applications, but you don't gain any speed, and the "Hello World" will be at least 10 MB or more. Think instead of the commercial advantage of having a product in one file of a few hundred KB, without dependencies of any kind and impossible to decompile.

    Or let me try it this way. Do you have any imagination? Imagine yourself in the position of a software salesperson. You are sitting with your presumptive future customer, and the Java and .NET guys have just left. Can you imagine an easier sell?

    Regards
    Rolf Tollerud
  38. if you had a product[ Go to top ]

    Sorry Steve but your explanation is unconvincing. It is enough for any product to be precompiled for Windows and Linux.
    If one has a simplistic understanding of what an "application" is, then I can understand why they would think/believe that.
    That is sound commercial practice.
    Creating a lot of extra work for yourself and your company when there is no need to? Working smarter, not harder, is sound business (commercial) practice.
  39. if you had a product[ Go to top ]

    Sorry Steve but your explanation is unconvincing. It is enough for any product to be precompiled for Windows and Linux. The other 35 OS you could provide as a special service that the customer has to pay for. That is sound commercial practice.

    No, it isn't enough. What versions of Windows? 32-bit? 64-bit? What about Windows on mobile devices? Optimised for multicore? What versions of Linux? Linux on what platforms? PPC? SPARC? AMD 64-bit? There is no such thing as being 'precompiled for Linux', as there is no single binary platform called Linux. Even if you compile for the Mac, they are about to switch to Intel. Then you have to support two versions for the Mac. It is maintenance madness.

    It is not sound commercial practice to charge as a special service what others are offering for free by using modern technologies like Java and the CLR. For example a company is going to look very silly indeed if they only offer a product for one model of mobile phone, while other companies are offering competitive products that run on any Java-enabled phone.
  40. I am waiting for "I am sorry"[ Go to top ]

    "What versions of Windows? 32-bit? 64-bit?..What versions of Linux? etc"

    You know very well that I meant Windows 32-bit and Linux/Intel. The others have to pay. But you are entitled to another opinion, of course. See you in the competition for the customers!

    BTW, I cannot see that I got your apology for the "loading of the jar file before extracting the class" question?

    Steve,
    I apologise. It is self-indulgent(!)

    I never got Cameron’s apology either..

    Cameron,
    only portions of the (jar) file are loaded into memory (!)

    Of course, if I had been Steve, Cameron or Dilip Ranganathan, I would have added some snide remarks about how you need a Java tutorial primer, how you should go back to primary school, or worse. But I resist! You see? I am holier than you! ;)

    Regards
    Rolf Tollerud
    (I do know that HotSpot exists in Java, not only J2EE. Thank you for pointing it out! But adding it to all the other mess in J2EE doesn't help things, does it?)
  41. I am waiting for "I am sorry"[ Go to top ]

    BTW, I cannot see that I ever got your apology for the "loading of the jar file before extracting the class" question?

    Final comment, to avoid irritating people too much. The jar file is not loaded if it is on the same local filesystem as the Java application. It is 'opened', and the appropriate entries are read. As with any random-access technique, this obviously does not require loading the entire file. If the jar file is only accessible over a network, then the entire file is usually fetched so that the required entry can be accessed. This is why, with distributed applications (such as applets), some effort goes into organising code into jars so that only the required jars need be fetched. Such effort is obviously not required with jars on the local filesystem, which is usually the case for server-side applications.
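    This behaviour is easy to see with the standard java.util.jar API: opening a JarFile reads the archive's table of contents, and getEntry() then fetches one entry without streaming the rest. A minimal sketch (the jar contents and entry names here are invented for the demo):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.JarOutputStream;

public class JarEntryDemo {
    // Build a small jar on disk, then read back a single entry
    // without decompressing the rest of the archive.
    public static String readSingleEntry() throws Exception {
        File jar = File.createTempFile("demo", ".jar");
        jar.deleteOnExit();

        // Write a jar with two entries.
        try (JarOutputStream out = new JarOutputStream(new FileOutputStream(jar))) {
            out.putNextEntry(new JarEntry("com/example/A.txt"));
            out.write("hello from jar".getBytes("UTF-8"));
            out.closeEntry();
            out.putNextEntry(new JarEntry("com/example/B.txt"));
            out.write("unrelated entry".getBytes("UTF-8"));
            out.closeEntry();
        }

        // Opening the JarFile reads only the directory at the end of
        // the archive; getJarEntry() then seeks straight to A.txt.
        try (JarFile jf = new JarFile(jar)) {
            JarEntry entry = jf.getJarEntry("com/example/A.txt");
            try (InputStream in = jf.getInputStream(entry)) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                byte[] tmp = new byte[256];
                int n;
                while ((n = in.read(tmp)) != -1) buf.write(tmp, 0, n);
                return buf.toString("UTF-8");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readSingleEntry());
    }
}
```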
  42. hmm, how is it done?

    Forgive my persistence, my dear old fellow, but a jar file is just an ordinary zip file, isn't it? It is not an indexed binary file or something. So how does it find the (compressed) file? By reading sequentially through the file from the beginning until it finds it?
  43. P.S.

    I am beginning to understand why the big J2EE servers take several minutes to start!
  44. startup time...

    My Tomcat 5.5 starts in 4-5 seconds on a $30 CPU (an AMD 2000).
    And don't tell me that Tomcat isn't a (J2EE) server, because somehow Microsoft claims IIS to be one.

    Since when do zip files support a "solid" compression mode (like RAR or 7z)? Files are stored separately, and you do not need to decompress the whole archive just to extract one file.

    Regards.
  45. startup time...

    My tomcat 5.5 starts in 4-5 seconds on 30$ cpu (amd 2000)... you do not need to decompress all the archive to just extract one file.

    It's so hard to pass a filename to jar -x. It should just know what class files I need by reading my brain waves.
    </sarcasm>

    I'm all for being efficient and loading just the classes you need and only importing the needed classes, but blaming jar files isn't really a valid argument to me.

    peter
  46. hmm, how is it done?

    I'm probably not as technically minded as the average programmer on here, but I hope this helps: zip and jar files have a directory at the end giving the offset of each file in the archive.
  47. hmm, how is it done?

    Very well, produce some evidence that it is done in this way. IMO it will take even more time than loading the file into memory.
  48. hmm, how is it done?

    Dude... you must be out of your mind to come out here trolling, asking basic, trivial questions of the whole community... beginner-level stuff... completely ignoring the fact that the very reason .NET was created was that VB wasn't good enough against a more sophisticated technology platform like Java.

    MS had to preserve its market share, and it has done a good job of it so far with .NET... but let's not forget that .NET has reached this level of maturity (OO and ASP.NET web controls, and now web services and what not) by taking lessons that the community (the software industry at large) has learned in the last 10 years on Java's shoulders... Java has paved the way for .NET... just like C and C++ paved the way for Java (not VB, nope). Where is the gratitude ma...

    Java is evolving (so much for the number 8000) and will continue to do so... I am glad at least the community has a voice on this side of the world!

    RTFM, exactly.
  49. hmm, how is it done?

    Very well, produce some evidence that it is done in this way. IMO it will take even more time than loading the file into memory.

    Where did you go, Rolf... did you run away? Oh c'mon now :p
  50. hmm, how is it done?

    Man... I just can't stop kicking his ass, can I :p

    Rolf is _not_ a healthy argument maker (read, sensible debater) ... not here ... not on the www.TheServerSide.NET ... not anywhere... :p

    Dog eat dog et all...
    http://www.theserverside.net/tss?service=direct/0/DiscussionThread/threadViewer.toggleShowNoisy2&sp=l23310&sp=T#107416

    Better yet...
    http://www.theserverside.net/news/thread.tss?thread_id=31208#153177

    Poor guy or not?
  51. hmm, how is it done?

    Please stop. Attacking Rolf does no good for anyone, and violates the posting rules for TSS.
  52. hmm, how is it done?

    Please stop. Attacking Rolf does no good for anyone, and violates the posting rules for TSS.
    This is just the effect, we all know what the cause is.
  53. hmm, how is it done?

    Sure, but we don't need to continue, do we? After all, we're sort of adult people, maybe.
  54. RTFM

    http://java.sun.com/j2se/1.4.2/docs/guide/jar/jar.html
  55. ok

    let's say you have 200 classes

    According to Peter and Juozas: the loader opens the "jarindex" to find out where the first class is, and then extracts the class by calling into the jar file with the class name as a parameter. Then it does the same 199 more times.

    Would you not load the file into memory first?
  56. one more try

    1. Download the Java Tutorial from java.sun.com
    2. Check the difference between extracting 1 file and decompressing the complete tutorial.
    3. (optional) Read the tutorial :-)

    Kind Regards,

    P.S.
    If you do not believe that the class loader can do the same - shoot yourself...
  57. one more try

    It is the attitude ("asking basic/trivial questions... download the Java Tutorial") that encourages me to write - I will continue until the "jerk" attitude of the Java developers is gone. That will probably happen when the available Java jobs are 10% of the job market, or before.

    aXe,
    "by taking lessons that the community (software industry at large) has learned in the last 10 years on Java's shoulders"

    You forget that Microsoft was the Java industry/development leader when they were sued by Sun (probably for that very reason).
  58. one more try

    It is the attitude ("asking basic/trivial questions..download the Java Tutorial") that encourages me to write... You forget that Microsoft was the Java industry/development leader when they was sued by Sun.

    yippie yaie yae... (fill in the kicks)!
  59. Microsoft is the java leader!!!

    Microsoft has never been "the Java industry/development leader". They tried to push their own Java-based Windows technology on the back of Java's growing popularity (it has even been proven in court, remember?).

    Ask silly questions and you get the answers you deserve...

    I read your posts from the "dark side of the serverside" :-) and they are knowledgeable, so this is a war for the newbies, isn't it? ;-)

    Best Regards
  60. American court?

    "it even has been proven in court, remember"

    US law is a joke. MS has not done anything that is not done every day by any company. But what can you expect of a judicial system that gives a man 35 years in prison for stealing a black-and-white TV? $10 million in damages to a lady because she was exposed to perfume? That sets OJ Simpson free but condemns Michael Jackson to prison?

    I can only pray that I never will come into contact with American "justice".

    And Microsoft was far far ahead of Sun in Java technology, as far ahead as they are now!

    hi hi (the i pronounced as in malicious)

    Regards
    Rolf Tollerud
  61. American court?

    "it even has been proven in court, remember" US law is a joke... And Microsoft was far far ahead of Sun in Java technology, as far ahead as they are now!

    Microsoft was far far ahead of Sun in Java technology, as far ahead as they are now...

    Yeah, right. People, MS still has something called J#.NET, by the way... take a guess at why they chose the letter 'J'!

    Microsoft was .... in Java technology, ... as they are now...

    Phhbbbt :))

    Let it out, Rolf... let it all out... :p
  62. American court?

    US law is a joke... Microsoft was far far ahead of Sun in Java technology, as far ahead as they are now... Let it out, Rolf... let it all out... :p

    To answer your JAR/ZIP file question... ZIP files are nothing like DLLs, which are plain old binary file formats.
    Ever heard of random-access vs sequential-access?
    http://java.sun.com/docs/books/tutorial/essential/io/rafs.html

    <quote>
    A ZIP archive contains files and is typically compressed to save space. It also contains a dir-entry at the end that indicates where the various files contained within the ZIP archive begin ...

    Suppose that you want to extract a specific file from a ZIP archive. If you use a sequential access stream, you have to:

       1. Open the ZIP archive.
       2. Search through the ZIP archive until you locate the file you want to extract.
       3. Extract the file.
       4. Close the ZIP archive.

    Using this algorithm, on average, you'd have to read half the ZIP archive before finding the file that you wanted to extract. You can extract the same file from the ZIP archive more efficiently by using the seek feature of a random access file and following these steps:

        * Open the ZIP archive.
        * Seek to the dir-entry and locate the entry for the file you want to extract from the ZIP archive.
        * Seek (backward) within the ZIP archive to the position of the file to extract.
        * Extract the file.
        * Close the ZIP archive.

    This algorithm is more efficient because you read only the dir-entry and the file that you want to extract.
    </quote>

    You have got to agree that 'random' is better, and a JAR is a ZIP... which is not a DLL... you get the picture...
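    That "dir-entry at the end" is the ZIP End of Central Directory record (signature 0x06054b50, 22 bytes when there is no archive comment). A sketch that builds a tiny archive and then seeks straight to that record to read the entry count, without touching any of the file data (file names here are invented for the demo):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.RandomAccessFile;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class EocdDemo {
    // Read a little-endian unsigned value of 'len' bytes at the current position.
    private static long readLE(RandomAccessFile raf, int len) throws Exception {
        long v = 0;
        for (int i = 0; i < len; i++) {
            v |= (long) (raf.read() & 0xFF) << (8 * i);
        }
        return v;
    }

    public static long countEntries() throws Exception {
        File zip = File.createTempFile("demo", ".zip");
        zip.deleteOnExit();

        // Write a zip with three entries.
        try (ZipOutputStream out = new ZipOutputStream(new FileOutputStream(zip))) {
            for (String name : new String[] { "a.txt", "b.txt", "c.txt" }) {
                out.putNextEntry(new ZipEntry(name));
                out.write(("contents of " + name).getBytes("UTF-8"));
                out.closeEntry();
            }
        }

        try (RandomAccessFile raf = new RandomAccessFile(zip, "r")) {
            // With no archive comment, the 22-byte End of Central
            // Directory record is the last thing in the file.
            raf.seek(raf.length() - 22);
            long sig = readLE(raf, 4);
            if (sig != 0x06054b50L) throw new IllegalStateException("not an EOCD record");
            raf.skipBytes(6);      // disk-number fields + entries-on-this-disk
            return readLE(raf, 2); // total number of central directory entries
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(countEntries());
    }
}
```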
  63. American court?

    US law is a joke... I can only pray that I never will come into contact with American "justice".

    At least the U.S. HAS a justice system. Do they have one in the dimension you are from?

    As for OJ vs Michael - Unfortunately the glove fits Michael. He He (done in high pitched voice).
  64. American court?

    "it even has been proven in court, remember" - "MS had not done anything that is not done everyday by any company." - Rolf Tollerud
    So that makes it right?
  65. one more try

    It is the attitude ("asking basic/trivial questions..download the Java Tutorial") that encourages me to write - I continue until the "jerk" attitudes of the Java developers is gone.
    Yeah, I know how boring teachers can get. ;)
  66. offtopic

    BTW, we went off-topic...

    I have several notes about the test itself...

    1. Which VM was used, the 32-bit or the 64-bit one? (The Opteron is 64-bit, isn't it?)
    2. The server has 4 GB, so why -mx1024m?
    3. No -server flag for WebSphere?
    4. What about JRockit (64-bit)/WebLogic?
    5. Web services are not the best thing since Russian vodka ;-) Let's check SOAP vs RMI/CORBA!!!
    6. I remember Microsoft said .NET 1.1 was 27 times faster than Java... In this test it's slower... the new yogurt is better than the old one! (Actually, the old one was toxic! ;-)

    Sincerely and so on.
  67. Re: offtopic

    BTW we went offtopic... I have several notes about the test itself... 1. which VM has been used 32 or 64 bit one??? ... 6. I remember Microsoft said net 1.1 was 27 times faster than java... In this test it's slower...


    FYI, this was run on 32-bit Windows, so the Opteron machine is running in 32-bit mode. WebSphere 64 for Windows will be out in a month or so, I think, so it could then be run on 64-bit Windows. Or the code can be downloaded and run on WebSphere 64-bit/Linux now. What I have found (and various IBM perf papers basically agree) is that 64-bit will be 10-15% slower than 32-bit (whether Linux or Windows Server 64) for a test like this where memory is not an issue (remember, the WSTest web services are completely stateless, no EJBs are involved at all, and there is no memory buildup of objects).

    So garbage collections were not an issue in this test for either .NET or Java, since essentially nothing accumulates for a full-sweep GC (in .NET, a gen 2 GC). When running the benchmark, you will notice that running with -Xms/-Xmx = 512 achieves essentially the same results as -Xms/-Xmx = 1024; 1024 was chosen simply because that's the heap size Sun used in their original testing. Using a heap size larger than needed (e.g., the machine had 4GB available) is just a waste and means slower full-sweep GCs for this test. WebSphere is also by default now using a parallel model for GCs, and by default the WebSphere server JVM runs in -server mode. You can't change this.

    As for SOAP with XML/text serialization, we have gotten to the point with .NET 2.0 that in many cases doing .NET Remoting with binary serialization is not necessarily faster than building a straight ASMX .NET web service (which provides full interop with J2EE). So it does not *have* to be the case that XML and/or SOAP is slow. Sometimes people just assume this to be the case, but if you actually test it, at least with .NET 2.0, performance can be great, and this preserves the interop with J2EE vs. binary serialization. It would indeed be interesting to see how RMI compares to Java/SOAP for various app servers. I expect the difference will be much greater until the Java serialization engines improve (that seems to be the bulk of the difference in this benchmark wrt .NET vs. JWSDP/WebSphere). BEA was not tested; perhaps it would offer better perf than either JWSDP 1.5/Sun HTTP Server or WebSphere 6.0/IBM HTTP Server. It would have to be tested, of course, to find out.

    One other note: according to IBM, 64-bit can make a tremendous difference when using encryption (e.g., WS-Security). So look in the future for an extension to the WSTest benchmark with WS-Security, to see 32-bit Java vs. 32-bit .NET vs. 64-bit Java vs. 64-bit .NET (.NET 2.0 is available for both 32-bit and 64-bit Windows). It will be interesting to see, for a non-memory-intensive test like this, how 64-bit platforms may boost performance; my guess is the encryption test case might be one key area. Other than that, based on testing I have done, unless it's a memory-intensive middle tier (say, >4GB of RAM used for a middle-tier cache), I would not expect 64-bit to perform any better; in fact, as IBM's perf papers show, it may actually be 10-15% slower for middle-tier apps (save for the >4GB memory usage or possibly the encryption case).

    -Greg
  68. Re: offtopic

    Well, the XML libs would allocate RAM, and depending on which ones were used, a LOT of RAM. So GC would in fact be a huge factor.

    Further, it's fairly easy to change the VM for Websphere, unless IBM has done something really stupid. I haven't used WAS 6, but in WAS 5, 5.1, and 4, changing the VM was fairly easy (and usually pretty impactful.)

    I found the test to be pretty relevant, although I don't think it was relevant in the same ways that most people seem to. :)
  69. re: offtopic

    Well, the XML libs would allocate RAM, and depending on which ones were used, a LOT of RAM. So GC would in fact be a huge factor... I found the test to be pretty relevant, although I don't think it was relevant in the same ways that most people seem to. :)

    The XML Mark tests were only run against Sun's JVM 1.5, not against IBM's JVM 1.4. In this case, we tuned exactly as Sun tuned, applying precisely the tuning they did (running in -server mode, heap size, access logging off, etc.)

    I'll repeat that for these tests (WSTest and XML Mark) I don't think any amount of GC tuning is going to make any difference beyond what was already done. Again, make a specific suggestion and I can run the tests and let you know, or that can be done by others with the code published (XML Mark is particularly easy to run since it can be run on a single machine with no third party test software needed).

    As for changing the IBM JVM parameters, just tell me explicitly what parameters you want me to try, I will even try to do it over the weekend if you are specific. IBM has a couple of good redbooks on web service perf tuning and websphere 6.0 general tuning, both of which I am quite familiar with. But again I'll make the offer, whatever JVM startup params for websphere you want me to try, I will.

    -Greg
  70. re: offtopic

    The XML Mark tests were only run against Sun's JVM 1.5, not against IBM's JVM 1.4. In this case, we tuned exactly as Sun tuned, applying precisely the tuning they did (running in -Server mode, heap size, access logging off,

    Thanks for these further details.

    My only criticism of this approach is that you always need to verify that tuning parameters used with previous versions of the JVM still hold in the version you're using. As I've stated before, I'm not convinced that these settings are optimal. Not that this necessarily makes a difference. My primary concern is the focus of the test (peak rates vs. sustained rates) and the way the numbers have been reported (a complete lack of statistics).

    Reporting a single value is akin to randomly picking someone on the street and offering them up as a person of average height. Even if you did randomly pick 1023 people and take an average, that value by itself says very little about the population from which you sampled. Furthermore, you cannot make a valid comparison of averages obtained from different sample sets (from different populations) without knowing their spread.

    The question you are really asking is: are these populations significantly (statistically) different? Tests (such as Student's t-test) are designed to make this determination. Eyeballing it (as one is forced to do with this report) leads to the popular phrase: there are lies, damned lies, and statistics. The truth in this statement is that the misapplication of statistics will hurt you!

    For example (this is from memory, as I don't currently have the report I wrote to hand), I did a study for some people who were terminated from their jobs. The contention was that there was an age bias (that age discrimination was present) in the decision of who got to stay and who was given the boot. Now, the average age of those who were let go was 35.2 (if I remember correctly) and the average of those still employed was 34.8. So the difference in age between the two groups was less than half a year, and one might say there is no age bias between the two groups. However, when I applied a t-test to the raw data, I found that there was only a 1.7% chance that there was *NO* age bias in the data!

    Now you may have bet your business just based on the first set of numbers but would you have done so on the second?
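    For anyone wanting to try this on benchmark samples, the t statistic itself is straightforward to compute; only turning it into a p-value requires a t-distribution table or a stats library. A minimal sketch of Welch's (unequal-variance) version, with made-up throughput samples:

```java
public class TTestSketch {
    static double mean(double[] xs) {
        double s = 0;
        for (double x : xs) s += x;
        return s / xs.length;
    }

    // Unbiased sample variance (divide by n - 1).
    static double variance(double[] xs) {
        double m = mean(xs), s = 0;
        for (double x : xs) s += (x - m) * (x - m);
        return s / (xs.length - 1);
    }

    // Welch's t statistic: does not assume the two groups share a variance.
    public static double welchT(double[] a, double[] b) {
        double se2 = variance(a) / a.length + variance(b) / b.length;
        return (mean(a) - mean(b)) / Math.sqrt(se2);
    }

    public static void main(String[] args) {
        // Two small invented throughput samples (tps), for illustration only.
        double[] a = { 1010, 1022, 1018, 1030, 1015 };
        double[] b = { 995, 1020, 1012, 1028, 1008 };
        System.out.printf("t = %.3f%n", welchT(a, b));
        // Compare |t| against Student's t distribution (with Welch's
        // degrees-of-freedom formula) to decide whether the runs differ.
    }
}
```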

    Maybe another study is in order?

    Kirk
  71. Re: offtopic

    Greg, three questions, if you do not mind:

    1. Have you found other general .net applications to also run so much faster in the new version of .net?

    2. Did you change any of the code to make it run so much faster?

    3. There are some people in this forum who could probably make the Java version much much faster than it currently is. Kirk Pepperdine, Cameron Purdy, to just mention two. What are the chances of getting Microsoft to sponsor such a tuning exercise?

    Kind regards

    Heinz
    --
    http://www.javaspecialists.co.za
  72. re: offtopic

    Greg, three questions, if you do not mind... What are the chances of getting Microsoft to sponsor such a tuning exercise?

    Heinz,

    No problem:

    1. It depends on the case. There are new features in .NET 2.0 that can make a tremendous difference in perf. For example, ADO.NET 2.0 is much better than ADO.NET 1.1 with very large datasets; it also supports statement batching (something JDBC has had for a while). The new System.Transactions is a way to do distributed transactions without inheriting from ServicedComponent (meaning you no longer have to go through a COM+ interface, which is faster). There are also some nifty caching features when used with SQL Server 2005: basically something called query notification, where the middle-tier cache is never stale; when a row is updated in the DB (by any app!), the cache is automatically invalidated (vs. having to do it programmatically or time-based). Some benefits come from explicitly targeting these new features; some come just with a recompile. You can't make a blanket statement that .NET 2.0 is always faster than .NET 1.1 (especially since it's still in final beta). But these, along with hugely improved XML parsing perf, are some areas worth mentioning.

    2. In terms of WSTest, we are testing the same code that was tested in the original tests, with no changes. It's very, very simple code, by the way, especially the echo tests Sun created---just a parameter in and back out (deserialized and then re-serialized). There is not much to change there. In the case of XML Mark, as noted in the paper, two changes were made to Sun's code. The first corrected a bug in their Java code, where the wrong value was used to determine whether to do serialization, and hence Java never serialized even when the user set the run properties to do so. So that was fixed. Also, in .NET, we found that Sun was using the GetElementsByTagName method in C#, and this results in a mismatch between the C# and Java versions. Although both harnesses use XmlElement.GetElementsByTagName(), the C# implementation of this method keeps track of a live list of nodes, which is negatively impacted by some of the DOM test scenarios that have edits. A better method for element selection in the C# harness is XmlElement.SelectNodes(), which matches the Java harness in functionality. The published code allows you to compile conditionally to use either method. You will see that .NET 2.0 handily beats Sun JVM 1.5 in either case, but by a wider margin with this correction. It's an easy test to run on your own. No other changes were made from Sun's implementation.

    3. I assume you are talking about WSTest. That's great... I would certainly take them up on this offer. I do not think they will get it to run any faster using JWSDP 1.5/Sun HTTP Server or IBM WebSphere 6.0/IBM HTTP Server as done in these tests. BEA may well offer different results; I have no idea. I will happily test any reasonable suggestions made in terms of tuning parameters. Basically, tuning was carefully done for all products, with lots of combinations tried, and in the case of both tests, other than obvious stuff like turning off logging and session state, heap size, and a couple of thread settings, there is not much else to do. But just make real suggestions; I appreciate these vs. the blanket dismissals of the results offered by Cameron and a few others (flying blind without ever having performed their own tests). I am pretty confident in the results, but publishing the code, tuning, and test harness is the only way to have the transparency to see whether different settings might make a difference. And of course, the code is published, so anyone can test on their own and offer results. If customers want to hunt around for different JVMs and/or proprietary non-interoperable SOAP (non text/xml) mechanisms, that's their prerogative. Most at some point, though, don't like to hunt through a zillion different solutions to a specific perf problem, only to find that an app server or JVM that's great for one scenario is not so great for another. I guess the point is that from a customer standpoint I believe it makes more sense to test mainstream things, with the JVMs from the major vendors (Sun, IBM, and BEA, for sure).

    -Greg
  73. re: offtopic

    Greg,

    Thanks for your detailed response.

    Let's hope TSS can rise to the challenge :)

    Do you have the raw data of the performance results, or maybe just the standard deviation?

    Kind regards

    Heinz
    --
    http://www.javaspecialists.co.za
  74. thanx!

    This is what I call a good post!

    The problem is that we have only one .NET and so many JVMs/vendors that testing all possible combinations is virtually impossible...

    Surely there is one which beats .NET ;-)

    Peace (c)CP :-)
  75. KARMI

    GREG - "I expect the difference will be much greater until the java serialization engines improve (that seems to be the bulk of the difference in this benchmark wrt .NET vs. JWSDP/WebSPhere)"



    It has been improved for some time; details about the optimized RMI can be googled (+karmi +rmi). It's generous on your part not to benchmark against high-performance Java components. Then again, what would Microsoft know about high performance.

    One question: still herding the sheeple at Microsoft to rig polls online? Move on, people, don't feed the vultures. You're liable to get picked apart.


    P.S. Tell your engineers to stop cold-calling Google for career opportunities.


    nn
  76. soap, not rmi

    GREG - "I expect the difference will be much greater until the java serialization engines improve..." It's generous on your part, not to benchmark against high performance java components.

    Uh, this is not an RMI benchmark. It's a SOAP benchmark, so we are talking about SOAP serialization engines to/from text/xml here, which seems to have nothing to do with what you posted. What was benchmarked, by the way, was IBM WebSphere 6.0 with IBM's HTTP server, and Sun JWSDP 1.5 with Sun's HTTP server. This is the mainstream stuff. Nothing was chosen in order to be 'slow'; I don't think Sun or IBM would particularly appreciate you calling their products 'non-high-performance.' I never said that; the numbers are simply the numbers for these products. If you want to do your own testing, that's great; it's why we (and Sun) published the code.

    -Greg
  77. soap, not rmi

    Greg,

    So what are you *really* saying?

    We should just dump java and go .net?

    What do *you* want us to do?

    Just be honest and quit hiding behind these 'benchmarks'.

    It must be nice to work for M$... you get to sit there all day on message boards and get paid to poo-poo other people's products... kind of pathetic, really.

    I work around people who work for M$. They are MASTERFUL at the art of poo-pooing anything that is not from M$. You are obviously trained in the M$ Jedi arts as well.

    I don't care if .NET is 100 times faster, I'm not using it.

    With Java I have freedom and choice. With .NET you have complete lock-in and high prices. I don't care if Java is a few percent slower at some dumb web services benchmark (I don't even use web services, so I don't care anyway).

    I'll take freedom and choice over M$ any day.

    Mike
  78. ok

    let's say you have 200 classes... Would you not load the file into memory first?

    I think you're misquoting here. I suggested using something like Ant to launch a separate process to extract the desired classes into a working directory, though you might as well extract the whole thing. By launching a separate process from Ant (or the tool of your choice), it's loaded into a separate VM instance. Once the files are extracted, add the path to your URLClassLoader; adding the path tells the classloader where to search for the class. It's really not that hard. The discussion has totally strayed off topic, but I'll attempt to bring it back.
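    The generate-then-load flow described above can be sketched end to end with the JDK's own compiler API plus a URLClassLoader. This is only an illustrative sketch: the class name and source are invented, and it requires a JDK (ToolProvider returns null on a bare JRE):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class DynamicLoadSketch {
    // Compile a tiny class into a temp working directory, then load it
    // through a URLClassLoader pointed at that directory and invoke it.
    public static String compileAndGreet() throws Exception {
        Path dir = Files.createTempDirectory("dyn");
        Path src = dir.resolve("Hello.java");
        Files.write(src,
            "public class Hello { public static String greet() { return \"hi\"; } }"
                .getBytes("UTF-8"));

        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        if (compiler == null) throw new IllegalStateException("JDK required");
        int rc = compiler.run(null, null, null, "-d", dir.toString(), src.toString());
        if (rc != 0) throw new IllegalStateException("compilation failed");

        // Adding the directory to a URLClassLoader makes the freshly
        // generated class visible without restarting the VM.
        try (URLClassLoader loader =
                 new URLClassLoader(new URL[] { dir.toUri().toURL() })) {
            Class<?> hello = loader.loadClass("Hello");
            return (String) hello.getMethod("greet").invoke(null);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compileAndGreet());
    }
}
```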

    Say I want to build a dynamic web service that needs to compile the schema the first time the server gets the request, and cache it. This is one of the more complex personalization scenarios from my wireless dev days. The model would extend a base schema, which the service provides, but end users can define new attributes or aggregate existing attributes. When the user logs on, the server needs to compile the extensions and make the classes available. Do .NET web services support this type of scenario?

    AFAIK, .NET 1.1 and 2.0 do not provide this kind of functionality in XSD or XSDObjectGen. JWSDP, on the other hand, does provide the ability to generate classes that extend some other class, as do other Java schema compilers. Support for this type of dynamic code generation varies between webservice toolkits. One of the reasons I wrote my C# schema compiler is so that I could support these kinds of scenarios. I know of one user of Dingo that made the choice because .NET XSD doesn't handle it. Assuming one uses the fastest XML parser available for the platform, I would say the deciding factor isn't performance, because any difference won't be significant. At that point, it is the support and flexibility of the toolkit that matters.

    If .NET provided the features I needed to build dynamic webservices, I wouldn't have written Dingo.

    peter
  79. I am waiting for "I am sorry"[ Go to top ]

    I never got Cameron's excuse either..

    Cameron,
    only portions of the (jar) file are loaded into memory (!)

    and
    Forgive my persistence, my dear old fellow, but a jar file is just a ordinary zip file isn't it? It is not an ordinary indexed binary file or something.. So how does it find the (compressed) file? By reading sequentially through file from the beginning until it finds it?

    .. then someone explained it, only to be answered with ..
    IMO it will take even more time than loading the file into memory.

    .. then someone pointed to the doc, so Rolf completely reverses himself ..
    let's say you have 200 classes

    According to Peters and Juozas: The loader opens the "jarindex" to find out where the first class is, and then extract the class calling the jar file with the class as parameter. Then it does the same 199 times.

    Would you not load the file into memory first?

    .. and when someone gets irritated by the constant stream of technical ignorance mixed with brash insults ..
    It is the attitude ("asking basic/trivial questions..download the Java Tutorial") that encourages me to write - I continue until the "jerk" attitudes of the Java developers is gone.

    Recipe:

    1) Rolf makes a stupid off-topic claim about some trivial and simple technology that he knows nothing about
    2) People explain why it is a stupid claim
    3) Rolf makes fun of the explanations because he cannot understand the technology
    4) People point him to the documentation
    5) Rolf suddenly reverses his argument, hoping that no one notices
    6) People call him on it
    7) Rolf explains that he is only doing it to teach people lessons because they're all such jerks
    8) GOTO 1

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
  80. how hard is it?[ Go to top ]

    To extract the class the file has to be loaded into memory anyhow! So what are the savings? If the jar file has dependencies, they are probably loaded too, even if they are not needed for the particular method. :(
    In the .NET world, as I explained above, all the system libraries that come with .NET are precompiled. So if you need one (1) method, the whole DLL is loaded but not JITed; it can then be shared between processes. Seems like an improvement, IMO, but it is nothing like the D language. In D you compile in the same way as Java or .NET, at the command line, for instance:

    dmd.exe tabs.d dfl.lib (include as many libs as you want)

    In this case both compiling and linking are done in the same step; the required methods are extracted from the library at compile time. Afterwards you have one small file without dependencies of any kind that can be installed just by copying it, and it starts so fast that you won't even have time to release the mouse button.

    Regards
    Rolf Tollerud

    If you really only want one class, why not use ANT or something else to extract just that class to a working directory and then add the path to the desired classloader? One need not load the whole jar into the same VM to extract one class. If the class has dependencies, of course it's better to load the whole jar, assuming it's packaged correctly.

    As far as I know, I can't extract a class from a DLL compiled with VS C# .NET. I could be wrong, since I haven't needed to do that. If separating components and small binaries is important, wouldn't it be better to package the interfaces in a stand-alone package and have all other components use the common DLL? Unless someone is insane and makes a DLL or jar that is 100MB, it's generally not a problem for server-side stuff. For client-side, it definitely makes sense to limit memory consumption. That was especially true of the early smart phones that only had 256KB of RAM available for all the apps. That's also why early smart phones used WAP instead of HTML or webservices.

    As mobile devices get more powerful, I suspect more people will be lazy and use webservices on small devices. In some ways that's good, because it means less work writing embedded clients in assembly. But it's bad, since connectivity in the US is poor and sending bloated SOAP messages is a good way to make sure the service is slow and unreliable.


    peter
  81. Something ain't right[ Go to top ]

    Fair enough, but the work involved in maintaining code and ensuring it runs and is optimised on all the platforms that your potential customers may require is considerable. This is especially the case with D, where operating-system specific code is required (e.g. '#include <windows.h>'). Some of us have better things to do.

    Now I understand much better why you staunchly defend C# and the .NET environment ... for platform independence :o)

    Isn't this a pointless debate? Nearly all desktops are Windows now, so who cares?

    The bottom line is that the .NET platform is a very productive platform for building applications, in particular GUI stuff. Performance is just a secondary debate; who cares if my GUI can handle 10,000 requests/second when I can only generate one a minute anyway ...

    a++ Cedric
  82. Something ain't right[ Go to top ]

    Fair enough, but the work involved in maintaining code and ensuring it runs and is optimised on all the platforms that your potential customers may require is considerable. This is especially the case with D, where operating-system specific code is required (e.g. '#include <windows.h>'). Some of us have better things to do.
    Now I understand much better why you staunchly defend C# and the .NET environment ... for platform independence :o)

    Me? When have I defended C# and .NET?
    Isn't this a pointless debate? Nearly all desktops are Windows now, so who cares?

    Me. I work in several organisations that have replaced some or all of their Windows desktops with Linux, which has resulted in huge savings in purchase, license and support costs.

    You may also be interested in a recent finding that suggests that the Macintosh install base is above 10%...
  83. Future of JIT[ Go to top ]

    Rolf,

    I agree with you that natively compiled code is faster than .NET and Java - today. In recent years I've been involved in a lot of performance tuning projects, and what I see is that the gap between Java and native code is getting smaller and smaller.
    I think that in the future JIT code can even be faster than natively compiled code. Why? Because it can adapt the code, after many executions, to the platform (CPU, graphics card, etc.) and maybe even to business rules, or inject caches into the code where suitable.

    Today's Java 5 VM architectures are getting better and better at managing performance and garbage collection, and where with native C code you have to think about the performance of memory allocation etc., with Java and .NET you can concentrate on business matters - that's worth the maybe 20% overhead.

    I remember the time when IBM switched to CMOS technology with their mainframes and Hitachi etc. didn't. CMOS was slower in the first years, but where is Hitachi today?

    - Mirko -
    codecentric - the Java performance tuning company
  84. stand back in line please[ Go to top ]

    I don't need any benchmark to notice the responsiveness of a Windows GUI application. My customers (unfortunately) have no problem with that either.

    There is a vast perceived quality difference between a C# and a C++ app. And Java does not even get an honorable mention. So if you want to sell a product - perhaps you have heard the word competition?

    When it comes to helping people do their work, the competition is cut-throat. Time is money.
    Todays Java 5 VM architectures are getting better and better managing performance and Garbage collection

    The most difficult competition for Java/.NET in the future will be languages like Digital Mars D, which also have garbage collection but not the other drawbacks of managed code.

    Regards
    Rolf Tollerud
  85. stand back in line please[ Go to top ]

    The most difficult competition for Java/.NET in the future will be languages like Digital Mars D, which also have garbage collection but not the other drawbacks of managed code.
    Regards
    Rolf Tollerud

    I would be very surprised if D doesn't also have the same issues as any other memory-managed VM. If your code doesn't do the object destruction itself, then it has to delegate the cleanup to the garbage collector at some determined interval.

    If you are dealing with small memory structures in your D implementations, then you are probably getting a false sense that you've nipped the GC problem simply by switching from .NET/Java to D.

    Try allocating 1 gig of objects in D and then see what happens to the performance of your application when D's GC decides it's time to clean up your 1 gig object graph.
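    For what it's worth, the experiment above is easy to try in Java too; a scaled-down sketch (note that System.gc() is only a hint to the VM, so the number is indicative rather than a benchmark):

```java
import java.util.ArrayList;
import java.util.List;

public class GcPauseSketch {
    // Build a large object graph, drop it, then time an explicit collection.
    public static long gcMillis(int objects) {
        List<byte[]> graph = new ArrayList<byte[]>();
        for (int i = 0; i < objects; i++) {
            graph.add(new byte[1024]); // ~1KB per object
        }
        graph = null; // make the whole graph unreachable
        long t0 = System.currentTimeMillis();
        System.gc(); // request a collection (the VM may ignore it)
        return System.currentTimeMillis() - t0;
    }

    public static void main(String[] args) {
        System.out.println("collection took ~" + gcMillis(100000) + " ms");
    }
}
```

    Scale the object count up toward the 1-gig graph described above and the pause grows with it, which is exactly the trade-off being debated here.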
  86. Dustin,
    "Try allocating 1 gig of objects in D and then see what happens to the performance of your application when D's GC decides it's time to clean up your 1 gig object graph"

    That depends on the skill of the developer - to foresee those problem areas and in that case override the GC with his own memory allocation, which in D works like in C++.

    Regards
    Rolf Tollerud
  87. Dustin,
    "Try allocating 1 gig of objects in D and then see what happens to the performance of your application when D's GC decides it's time to clean up your 1 gig object graph"
    That depends on the skill of the developer - to foresee those problem areas and in that case override the GC with his own memory allocation, which in D works like in C++.

    Regards
    Rolf Tollerud

    And how would that be different than JNI or unsafe code in .NET?

    peter
  88. That depends on the skill of the developer - to foresee those problem areas and in that case override the GC with his own memory allocation, which in D works like in C++.
    Regards
    Rolf Tollerud

    Yes, and those implementations will rob you of performance at some level. Depending on the implementation, one approach steals CPU cycles and cleans up little bits at a time, thus slowing down your overall real work execution. Another opts for the quickest performance by delaying GC as long as possible, paying for it later with long "stop the world" GC pauses.

    You can implement any number of hybrids of the two different extremes. Either way, performance is degraded. It's more of a choice of which is the most appropriate for what you are doing.

    Java can do this via JVM options, without having to implement your own solution. It may not be as fine grained, but for the majority of solutions, you can typically find the right balance simply with the appropriate combination of parameters.
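    For reference, this is the kind of balance Java exposes through command-line options (the values and the MyService class name are illustrative, not recommendations):

```shell
# -Xms/-Xmx: a fixed heap avoids resize pauses, but an oversized heap
#            lengthens each collection
# -XX:NewRatio: balance between young and old generations
# -verbose:gc: log each collection so you can see which trade-off you got
java -server -Xms512m -Xmx512m -XX:NewRatio=2 -verbose:gc MyService
```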
  89. The Jakarta implementation of JAXP is slow, I agree.
    But what stops you from using a faster implementation?
    When I discovered this problem, I switched to the Oracle 10g XDK (free, compact, and 5 to 10 times faster for XSLT transformations); it is also XML/XSLT 2 compliant.
    Look around!
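    The swap works because JAXP resolves its factory through a system property, so application code doesn't change. A sketch of the mechanism (the Oracle factory class name is an assumption from memory; check the XDK documentation):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class JaxpSwapDemo {
    public static Document parse(String xml) throws Exception {
        // Uncomment to select an alternative parser implementation, e.g. the
        // Oracle XDK one (class name is illustrative - consult the XDK docs):
        // System.setProperty("javax.xml.parsers.DocumentBuilderFactory",
        //                    "oracle.xml.jaxp.JXDocumentBuilderFactory");
        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        return factory.newDocumentBuilder()
                      .parse(new InputSource(new StringReader(xml)));
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parse("<root><a/></root>").getDocumentElement().getTagName());
    }
}
```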
  90. I can see two issues here offhand. First, it would be really nice to get some baseline performance improvements in certain areas of the Java environment. Obviously there is a lot of XML processing out there, so if we can get the Java-based systems to improve their performance in this (and related) areas, it would benefit everybody. I do have to say that in general Sun has done a pretty good job in recent years addressing speed and performance issues. Perhaps the issue here is not the actual capabilities of the JVM, but the difficulty of configuring and optimizing it? Tracing, tuning, testing, and such in a Java environment is very complex, and thus most Java apps and deployments are likely running on suboptimal settings.

    (Think setting up a Unix box vs Windows.. you can super tweak a Unix system and craft something very custom, but frequently don't have a simple interface like many of the MS tools..)

    That said, if the overall Java environment were not performant, that would be an issue. However, the disparity on XML processing specifically is not as large an issue for me. I would say that the Java side either needs optimization or additional tuning and configuration.. however.. in a "real-world" environment, your XML processing isn't likely to be the bulk of your processing overhead. If your request/response process spends, say, 10% of its time on XML in Java, with the rest doing the actual logic processing, you get a fairly limited benefit from optimization. Cutting your XML time in half is a 5% difference in the overall job.. certainly nice, but likely not the best place to target all optimizations unless you have tapped out all the others (database, network, system settings, etc.).
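    That arithmetic is just Amdahl's law; a quick sketch of the 10% case:

```java
public class AmdahlXml {
    // Overall speedup when a fraction f of total time is sped up by factor s.
    public static double overallSpeedup(double f, double s) {
        return 1.0 / ((1.0 - f) + f / s);
    }

    public static void main(String[] args) {
        // XML is 10% of the request and we halve it: only ~5% less total time.
        double speedup = overallSpeedup(0.10, 2.0);
        double saved = (1.0 - 1.0 / speedup) * 100.0;
        System.out.printf("speedup %.3fx, time saved %.1f%%%n", speedup, saved);
    }
}
```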

    Just a few cents..
  91. I am working on a Pinto vs Vega benchmark. :)
  92. Will Sun remake the benchmarks on a 2.6 kernel?

    Regards
    Horia
  93. Most developers aren't writing applications to run a/the stock exchange - if they were they should be writing in assembler, or writing a Java MDA that optimizes platform dependent machine code.

    Squabbling over this type of benchmark is meaningless, and is the wrong battle to fight. There are three areas in which Microsoft has no strategic defense.

    1. Integrity - Microsoft will never overcome their reputation.
    2. Platform dependence - see integrity.
    3. Because of their 'quick to market with garbage' strategy, Microsoft will likely never shake their reputation for insecure, unreliable, but fast-for-video-games software.

    Java has been and will continue to be optimized, and hopefully someday the Sun Solaris group will open their eyes and I can install a reliable pure Java Operating System.
  94. .Net faster than Java?[ Go to top ]

    Before, in the good old times, a heading like that would have created a major nuclear meltdown with hundreds of angry posts and endless accusations and counter-accusations.
    Nowadays it merely causes a big yaaaaaaawn.

    Tell us something we don't know already.

    Regards
    Rolf Tollerud
  95. .Net faster than Java?[ Go to top ]

    Before, in the good old times, a heading like that would have created a major nuclear meltdown with hundreds of angry posts and endless accusations and counter-accusations. Nowadays it merely causes a big yaaaaaaawn.
    Tell us something we don't know already.
    Regards
    Rolf Tollerud

    It is certainly not '.NET faster than Java'. It is '.NET 2.0 Beta Web Services faster than JWSDP or IBM's web services implementation'.

    This is, of course, not the same thing, no matter how much you wish it were.

    You have already seen, in previous posts, evidence of cases where Java is faster than .NET.

    "Performance Comparison of Java/.NET Runtimes (Oct 2004)"
    http://www.shudo.net/jit/perf/

    You obviously realise, then, that the phrase '.NET faster than Java' is a false generalisation, so the 'yawn' arises from you trolling yet again.
    Tell us something we don't know already

    We do keep trying....
  96. .Net faster than Java?[ Go to top ]

    http://www.shudo.net/jit/perf/ uses Microsoft .NET Framework 1.1.
    So, right now we don't know what gives M$ the huge performance gap: the runtime or the WS implementations.

    Regards,
    Horia
  97. .Net faster than Java?[ Go to top ]

    http://www.shudo.net/jit/perf/ uses Microsoft .NET Framework 1.1.
    So, right now we don't know what gives M$ the huge performance gap: the runtime or the WS implementations.
    Regards,
    Horia

    Good point.

    However, I would suggest that it is pretty likely to be the WS implementation. In those benchmarks I posted, .Net 1.1, fast Java implementations, and good C/C++ were all reasonably close. I would suggest that an improvement in .Net performance that made it generally many times the speed of optimised C/C++ is unlikely, to say the least!
  98. .Net faster than Java?[ Go to top ]

    Good point. However, I would suggest that it is pretty likely to be the WS implementation. In those benchmarks I posted, .Net 1.1, fast Java implementations, and good C/C++ were all reasonably close. I would suggest that an improvement in .Net performance that made it generally many times the speed of optimised C/C++ is unlikely, to say the least!

    Maybe, but the facts still stand. They never said that .NET is faster than Java (at least in this benchmark). They presented some facts, and it is hard to believe they lie.

    You suggest, however, that no runtime environment (Java, .NET, whatever) can beat the good old optimized C++ runtime by a wide margin, right? So given a platform, .NET and Java will be running head to head in terms of raw performance. Great, but the quality of the business implementation will make the difference (in this case the APIs are probably pretty close, since they did not tell us the difference in the number of source code lines :)) ).

    M$ wins this time at the corporate advertisement level.

    I have seen with my own eyes that kernel 2.6 and Java 1.5 (Sun's; never tested BEA's) is a huge leap forward for multithreaded apps compared to 1.4.2 running on 2.4 kernels. I am sure Sun or somebody else from the Java side will remake the benchmarks in their favor as long as the market needs that.

    Regards,
    Horia
  99. .Net faster than Java?[ Go to top ]

    Good point. However, I would suggest that it is pretty likely to be the WS implementation. In those benchmarks I posted, .Net 1.1, fast Java implementations, and good C/C++ were all reasonably close. I would suggest that an improvement in .Net performance that made it generally many times the speed of optimised C/C++ is unlikely, to say the least!
    Maybe, but the facts still stand. They never said that .NET is faster than Java (at least in this benchmark).

    Indeed, but I assumed that this was what you were suggesting as a possibility in the comment: "right now we don't know what gives M$ the huge performance gap: the runtime or the WS implementations".
    They presented some facts, and it is hard to believe they lie.

    I agree. I don't doubt that Microsoft has an excellent web services implementation. They have always provided high-performance libraries and tools for XML and XSL.
    You suggest, however, that no runtime environment (Java, .NET, whatever) can beat the good old optimized C++ runtime by a wide margin, right? So given a platform, .NET and Java will be running head to head in terms of raw performance.

    Exactly.
    Great, but the quality of the business implementation will make the difference (in this case the APIs are probably pretty close, since they did not tell us the difference in the number of source code lines :)) ). M$ wins this time at the corporate advertisement level.
    I have seen with my own eyes that kernel 2.6 and Java 1.5 (Sun's; never tested BEA's) is a huge leap forward for multithreaded apps compared to 1.4.2 running on 2.4 kernels. I am sure Sun or somebody else from the Java side will remake the benchmarks in their favor as long as the market needs that.
    Regards,
    Horia

    I agree. This competition between Java and .NET is healthy in that regard.
  100. .Net faster than Java?[ Go to top ]

    I am sure Sun or somebody else from the Java side will remake the benchmarks in their favor as long as the market needs that.

    These benchmarks are expensive to run, and it seems that the companies with the biggest marketing budget probably will always dominate.

    Heinz
  101. .Net faster than Java?[ Go to top ]

    Horia, how do you expect anybody to take your opinions seriously when you continue to use that silly M$ abbreviation for Microsoft?

    There have been several major improvements in .NET 2.0, both in the CLR and specifically in the XML serializer which is used to create the XML for the web services. The original 1.1 implementation was less than optimal, as it required reflection to generate a new C# class; that class then had to be compiled and dynamically executed, all at runtime. Hard to believe they could improve on such a streamlined process. :-)

    The concept that C++ is in all cases faster than .NET is a myth. The .NET CLR is optimized in several ways that are better than traditional statically linked C++ programs. One improvement in performance that leaps to mind is in JIT compilation. Since each method in the IL is compiled just before execution, the JIT compiler can make use of optimizations for the specific processor the application is running on. In a traditional C++ application you can take a guess at the CPU architecture, but you never know for sure which processor it will be deployed on. Therefore certain optimizations can't be used.
  102. .Net faster than Java?[ Go to top ]

    Horia, how do you expect anybody to take your opinions seriously when you continue to use that silly M$ abbreviation for Microsoft?

    What opinions are you talking about, and what M$ abbreviation has to do with anything?

    Regards,
    Horia

    P.S. If you are talking about the Java 1.5 on a 2.6 kernel issue, it is a fact and I don't really care if you don't believe me.
  103. Managed execution[ Go to top ]

    The concept that C++ is in all cases faster than .NET is a myth... One improvement in performance that leaps to mind is in JIT compilation. Since each method in the IL is compiled just before execution, the JIT compiler can make use of optimizations for the specific processor...

    I agree about the myth remark, but, imho, THE reason for the performance improvement is sooooooo simple: garbage collection. In today's OOP ( ;-)) ) coding style, where objects are coming and going fast, malloc is A Very Bad Thing®. It's waaay too expensive time-wise, compared to a garbage collector. I think that once JIT compilers came, let's say, close to raw C/C++ compilation, any perf edge of the compiled C code was irretrievably lost. In fact, I think the only way to get it back is to make malloc work in garbage-collection fashion.

    Yep, we traded size for speed here...
  104. .Net faster than Java?[ Go to top ]

    Horia, how do you expect anybody to take your opinions seriously when you continue to use that silly M$ abbreviation for Microsoft?
    Interesting...
    Microsoft used the $ for M$ themselves on the box for MS DOS 6.2
    "So, right now we don't know what gives M$ the huge performance gap: the runtime or the WS implementations"

    So you don't know? I'll tell you. In all micro benchmark situations it is important to always monitor the memory that is used. By giving it 4-5 times as much memory, Java can sometimes beat C# in tests because of the "hotspot" effect. But in real life, memory is a scarce resource that there is always too little of; therefore Java has all kinds of memory problems - hence the reason it loses all practical tests, like this one. That is not the only problem, though. Outage problems and strange "freeze up" periods are another.

    Regards
    Rolf Tollerud
  106. In all micro benchmark situations it is important to always monitor the memory that is used. By giving it 4-5 times as much memory Java can sometimes beat C# in the tests because of the "hotspot" effect.

    All of the benchmarks that _you_ have provided have run faster on Java.

    However, I'm willing to take you up on this bet about the memory -- let's re-run these tests on a server with 64MB of RAM (total physical memory) and see which comes out ahead, the Java impl or Windows+C#.NET .. which one will be faster?

    How much money are you willing to put down? I'm good for US$100,000 .. ;-)

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
  107. "So, right now we don't know what gives M$ the huge performance gap: the runtime or the WS implementations"
    So you don't know? I'll tell you. In all micro benchmark situations it is important to always monitor the memory that is used. By giving it 4-5 times as much memory, Java can sometimes beat C# in tests because of the "hotspot" effect.

    I think that you need to monitor all resources until you understand the resource utilizations when doing these types of benchmarks. Once you understand and have satisfied the resource requirements, you need to turn off monitoring.

    HotSpot is a profiler that directs the JIT. Memory is a totally different issue. Giving too much memory can have as detrimental an effect on performance as not having enough memory. This is because of GC efficiency (or lack thereof).
    But in real life, memory is a scarce resource that there is always too little of

    I must humbly disagree with this statement. Memory is cheap and plentiful. So cheap that I have no trouble maximizing the amount of real memory that I’ve put in all my machines. With the machine full of memory, I no longer have a need to use virtual memory. I’ve turned that off on all of my machines and my experience has been a marked improvement in performance for all the applications that I run (Java and otherwise). I just don’t see the memory problems that you are referring to.


    Kirk
  108. I just don't see the memory problems that you are referring to. Kirk
    The only memory problem Rolf has regarding Java is this: he keeps forgetting not to troll this site! ;)
  109. Our dear Rofl[ Go to top ]

    The only memory problem Rolf has regarding Java is this: he keeps forgeting not to troll this site! ;)

    You all do know that "Rolf Tollerud" is a not too concealed anagram for "ROFL trolled u", right? (ROFL = Rolling On Floor Laughing.)

    A troll keeps making obviously incorrect statements on purpose. That is the way he tries to trap you into his quagmire of pointless arguing. There will always be kids who think trolling is a new joke.
  110. man[ Go to top ]

    i knew this thread was going to blow up the moment it was posted.
  111. I knew it. Here come the personal attacks. Very well, I have stated my opinion anyway.

    With the micro benchmark it is advantageous to set up the memory, but in real-life applications "small is beautiful" rules. Eat it or not, I don't care.
  112. With the machine full of memory, I no longer have a need to use virtual memory. I’ve turned that off on all of my machines and my experience has been a marked improvement in performance for all the applications that I run (Java and otherwise).
    How much memory are you running with? And are you using Windows? If so, how do you turn off virtual memory? (I can only see a way to decrease it to a minimum of 2MB.)
  113. Kirk,
    "I must humbly disagree with this statement. Memory is cheap and plentiful"

    You forget that even if you have "enough memory", it still has to be managed, checked for garbage, etc. It is easier to manage an application that uses 20MB of memory than one that uses 200 (not to speak of 400, 800 and so on). There is also much more that can go wrong.

    Regards
    Rolf Tollerud
  114. Not that anyone will care[ Go to top ]

    I'm not sure the article is correct when it says SAX is a pull parser. It is an event-driven parser, but strictly speaking it is not the same as XPP or the newer stream pull parsers. I have a test application with some heavy business logic and I can easily get 400-450 transactions per second. I haven't looked at the code for the benchmark, but that's one reason I don't use SOAP/WS in my application. Instead, I just use XStream + XPP + Schema. It's much lighter and faster, and allows my business app to eat as much CPU as it wants. If I used Crimson + SOAP + WS, the app would run like a snail.

    peter
  115. Re: Not that anyone will care[ Go to top ]

    And therein lies the difference - all of the choices that Java developers have. Depending on your character, this may well determine your platform preference, besides of course the portability. MS will always choose to compare .NET with Sun and IBM solutions, since they are the standard-bearers in the Java world and the raw benchmarks tend to work in favour of MS. For those interested in developing valuable, maintainable software, using the components best suited to the job, the p*ssing contest between the two camps is of little interest except for the undoubted great improvements in Java brought by the competition from .NET - thanks Bill!
  116. P.S.[ Go to top ]

    I am sorry Kirk, I overlooked that you yourself stated:

    "Giving too much memory can have as detrimental an effect on performance as not having enough memory. This is because of GC efficiency (or lack thereof)"

    Precisely. Small is beautiful!
  117. To Rolf or not to Rolf[ Go to top ]

    ......

    So, according to Rolf, having either too little or too much memory is bad for applications. That is so funny to read...

    Please, man, get back to school.
  118. Kirk,
    "I must humbly disagree with this statement. Memory is cheap and plentiful"
    You forget that even if you have "enough memory", it still has to be managed, checked for garbage, etc. It is easier to manage an application that uses 20MB of memory than one that uses 200 (not to speak of 400, 800 and so on). There is also much more that can go wrong.
    Regards
    Rolf Tollerud

    That's work for the operating system (a good one should handle it properly) or the application shell provided with your framework (CLR, JVM, FTEXEC, etc.).

    Memory leaks will still be there... but only due to poor programming.
  119. Kirk,
    "I must humbly disagree with this statement. Memory is cheap and plentiful"
    You forget that even if you have "enough memory", it still has to be managed, checked for garbage, etc. It is easier to manage an application that uses 20MB of memory than one that uses 200 (not to speak of 400, 800 and so on). There is also much more that can go wrong.
    That's work for the operating system (a good one should handle it properly) or the application shell provided with your framework (CLR, JVM, FTEXEC, etc.). Memory leaks will still be there... but only due to poor programming.

    That is a different issue altogether!
  120. I just don’t see the memory problems that you are referring to.

    Well, simple 32-bit Wintel machines can still address at most 4GB of RAM.
  121. But in real life however, memory is a scarcity that the always is too little of
    I must humbly disagree with this statement. Memory is cheap and plentiful. So cheap that I have no trouble maximizing the amount of real memory that I’ve put in all my machines. With the machine full of memory, I no longer have a need to use virtual memory. I’ve turned that off on all of my machines and my experience has been a marked improvement in performance for all the applications that I run (Java and otherwise). I just don’t see the memory problems that you are referring to. Kirk

    In a serious and real enterprise environment it is right.

    But for Rolf it isn't. He is constrained by the 256MB RAM his client (mostly a one man operation) has in his old 486...
  122. By giving it 4-5 times as much memory Java can sometimes beat C# in the tests because of the "hotspot" effect.

    Nonsense. The Hotspot optimiser does not impose or require large memory use. It even runs on some CLDC implementations - in mobile phones.
  123. Ok. So why does Java always run slower than C# when you give it the same amount of memory?

    http://www.theserverside.com/discussions/thread.tss?thread_id=19226#82312

    According to you?
  124. Ok. So why does Java always run slower than C# when you give it the same amount of memory?
    http://www.theserverside.com/discussions/thread.tss?thread_id=19226#82312
    According to you?

    I don't take a single microbenchmark run by a single person as backing a statement as broad as 'Java always runs slower than C# when you give it the same amount of memory'.

    If you really believe this, accept Cameron's challenge:
    http://www.theserverside.com/news/thread.tss?thread_id=34396#173598
  125. "If you really believe this, accept Cameron's challenge"

    I have already done that in the thread ".NET webservices outperforms J2EE webservices in new test", where Cameron tried every trick in the book to hide how he had set up the memory. Please follow the link I gave you above. It is a little taxing to converse with Cameron, so you will have to excuse me; read the old thread instead.
  126. Please follow the link I gave you above. It is a little taxing to converse with Cameron so you have to excuse me, read the old thread instead.

    The test shown in that link was rather odd. You were comparing Java's BigDecimal (arbitrary precision) with C#'s decimal (128-bit). It was not comparing like with like.

    If you want to try something that is standardised, why not try the Linpack code in that link I gave you? Those are complex tests that compare a range of numerical methods and are respected international standards, not a single unrepresentative microbenchmark. Try these tests with different memory sizes. I would be interested in the results.
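    The like-with-like point is easy to reproduce. A minimal sketch (the class and method names are illustrative; note that MathContext.DECIMAL128 is only roughly comparable to C#'s 128-bit decimal, which is not an IEEE decimal128 format): at unlimited precision, BigDecimal division of 1 by 3 throws ArithmeticException for the non-terminating expansion, while a bounded MathContext caps the work at 34 significant digits.

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class DecimalDemo {
    // With no MathContext, 1/3 throws ArithmeticException (non-terminating
    // decimal expansion); DECIMAL128 rounds to 34 significant digits, which
    // is closer in spirit to a fixed-width decimal type than unlimited
    // precision is.
    static BigDecimal divide(BigDecimal a, BigDecimal b) {
        return a.divide(b, MathContext.DECIMAL128);
    }

    public static void main(String[] args) {
        BigDecimal third = divide(BigDecimal.ONE, new BigDecimal(3));
        // 34 significant digits, not an exception
        System.out.println(third.precision());
    }
}
```

    Benchmarking the bounded division against C#'s decimal would at least exercise comparable amounts of arithmetic per operation.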
  127. "If you really believe this, accept Cameron's challenge"
    I have already done that in the thread ".NET webservices outperforms J2EE webservices in new test", where Cameron tried every trick in the book to hide that he set up the memory. Please follow the link I gave you above. It is a little taxing to converse with Cameron so you have to excuse me, read the old thread instead.

    Rolf, nobody doubts your unbreakable conviction and religion in this matter. In fact, I believe that it has always been more important for you to defend your faith than to prove anything with evidence. Faith, after all, is believing in something one cannot prove.
  128. It is a little taxing to converse with Cameron ...
    LOL! You should be in our shoes.
  129. It is a little taxing to converse with Cameron ...
    LOL! You should be in our shoes.

    Rolf is right, I'm changing my life today - I'm jumping into .NET because his arguments are very powerful and persuasive.

    All those countless hours of posting/trolling have finally paid off: he's been able to convert ONE Java developer to go .NET with the power of his words.

    Well done Rolf and thank you. hehe.
  130. .Net faster than Java?

    Before, in the good old times, a heading like that would have created a major nuclear meltdown with hundreds of angry posts and endless accusations and counter-accusations. Nowadays it merely causes a big yaaaaaaawn.
    Tell us something we don't know already.
    Regards
    Rolf Tollerud

    I'll tell you something you do already know. Every time someone (be it MS, Sun, Oracle, IBM, etc.) releases a benchmark, it's tilted in their own favor. I looked at the comments on this article only to see if there was going to be the usual bickering.

    I am heartened by the lack of "mine is better than yours" bickering. The posts I've seen are constructive comments about which APIs are better for doing specific tasks.

    I wonder who invented benchmarking. I suspect it was either Lenin or Himmler. Either way, it's disinformation.

    John Murray
    Sobetech
  131. John,
    "Every time someone (be it MS, Sun, Oracle, IBM, etc) releases a benchmark, it's tilted in their own favor"

    Interesting. I have never seen a benchmark commissioned by Sun, Oracle or IBM. Come to think of it, I have never seen any coming directly from these companies either.

    Perhaps you can direct me to one of these strange beasts?

    Regards
    Rolf Tollerud
  132. I never seen a benchmark commissioned from Sun, Oracle or IBM.

    Oracle did the Petstore rematch that was 28x faster than the .NET one... you might remember it's the one where the load test didn't check the results, which as it turns out were probably all 503s.

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
  133. John,
    "Every time someone (be it MS, Sun, Oracle, IBM, etc) releases a benchmark, it's tilted in their own favor"
    Interesting. I have never seen a benchmark commissioned by Sun, Oracle or IBM. Come to think of it, I have never seen any coming directly from these companies either. Perhaps you can direct me to one of these strange beasts?
    Regards
    Rolf Tollerud

    http://www.theserverside.com/news/thread.tss?thread_id=13700
  134. According to my own benchmark, Java beats .NET in every aspect. I know Java, so as a consultant it brings me more money. Therefore, Java is the better technology, and I can set up any kind of performance, scalability, reliability, availability, extensibility, maintainability or manageability test to prove that.
  135. More sweeping statements

    According to my own benchmark, Java beats .NET in every aspect. I know Java, so as a consultant it brings me more money. Therefore, Java is the better technology, and I can set up any kind of performance, scalability, reliability, availability, extensibility, maintainability or manageability test to prove that.

    I have worked with both Java and .NET, and could show you cases where .NET beats Java hands down. Not all cases, but some. But this ship has sailed: .NET will naturally beat Java (in certain areas) due to the optimizations one can afford in the CLR by tying it to a single operating system. It makes basic Computer Science 101 sense.

    But your claim to be able to make Java win in any case intrigues me, Tomi; go on then, put your skills to the test and show us how you can make this particular test run faster under Java.
  136. More sweeping statements

    Alan,

    My statements were supposed to be sarcastic. It's hard to create an independent benchmark and impossible if tests are funded by a company that sells the participating product.
  137. More sweeping statements

    Alan, My statements were supposed to be sarcastic.

    Then Tomi, I apologise. Another case where humour doesn't necessarily translate online. Gotcha! ;)
  138. More sweeping statements

    But this ship has sailed, .NET will naturally beat Java (in certain areas) due to the optimizations one can afford in the CLR by tying it to a single operating system. It makes basic 101 Computer Science Class sense.

    I would disagree. Java can be highly optimised, not for a single operating system, but for individual processors. The work that the Hotspot system does is phenomenal, even to the extent of run-time re-arrangement of machine instructions to get the best use of multiple pipelines inside the CPU in some cases. I see no reason why a Java VM should not include more optimisations for an Intel or AMD CPU than a CLR VM.
  139. year after year same

    "Therefore, Java is the better technology, I can set up any kind of performance, scalability, reliability, availability, extensibility, maintenability or manageability test to prove that"

    Yes, that is what they said last year too, with their 8000 bugs... BTW, did you make much money with EJB 1.0? Do you know how much money the customers lost on EJB 1.0?
  140. year after year same

    "Therefore, Java is the better technology, I can set up any kind of performance, scalability, reliability, availability, extensibility, maintainability or manageability test to prove that"
    Yes, that is what they said last year too, with their 8000 bugs... BTW, did you make much money with EJB 1.0? Do you know how much money the customers lost on EJB 1.0?

    I don't know many companies that have lost money because of technologies. However, I know a lot of "customers" who lost a lot of money (and still are losing) because of stupid consultants/employees and wannabe techies.

    As for the web services discussion, how can we really compare? Isn't Microsoft using a C/assembly-coded XML parser and not a pure C#/.NET implementation? That's not a fair comparison.

    a++ Cédric
  141. life is unfair

    As for the web services discussion, how can we really compare? Isn't Microsoft using a C/assembly-coded XML parser and not a pure C#/.NET implementation? That's not a fair comparison.

    That’s true, Cédric. That is how Microsoft usually wins on these performance issues - by sprinkling optimized C code or even inline assembler in critical places.

    Regards
    Rolf Tollerud
  142. year after year same

    OK, just to clear some things up:

    > Ok, say that I buy that. Let us also say that I am buying "The J2SE 5.0 release of the HotSpot™ virtual machine includes 8000 bug fixes."

    There were NEVER, NEVEREVER, NEVERNEVEREVER 8000 bugs in the HotSpot compiler, not even when it was an alpha-stage Smalltalk compiler in Animorphic labs.

    There were around 8000 bugs and RFEs fixed in JDK 5.0. Which is actually pretty damn impressive. Those bugs range from typos in docs and small one-line fixes to complex RFEs like adding OpenGL acceleration on Unix boxes. And there are probably even more bugs and RFEs still in there. Like in every other complex software product. Or do you think that .NET is bug free? Or that .NET has 5-10 bugs? Do you know anything at all about software development and bug tracking? At least Sun is brave enough to keep their bugs publicly available. So we all know how many are still in there, and what the possible workarounds are. I'm not sure I can see that much with .NET and Windows.

    > Do you know how much money the customers lost on EJB 1.0?

    No. Do you?
  143. year after year same

    Microsoft has a product feedback center which keeps track of all reported bugs of its .NET 2.0 and VS 2005 beta. Right now it's at 12666 total bugs reported and 6658 suggestions. There are 29465 participants.
  144. One year old story

    John,
    "Every time someone (be it MS, Sun, Oracle, IBM, etc) releases a benchmark, it's tilted in their own favor"
    Interesting. I have never seen a benchmark commissioned by Sun, Oracle or IBM. Come to think of it, I have never seen any coming directly from these companies either. Perhaps you can direct me to one of these strange beasts?
    Regards
    Rolf Tollerud


    The code and benchmark used here were developed and released last year by Sun.
  145. old news

    Oh come on, anyone who has had to build XML-driven services knows not to use DOM, Crimson, Xerces or Xalan when performance is critical. You're better off using XPP or the new stream parser, which blow away the older parsers. XPP3 is more memory- and CPU-efficient and will generally beat the older Java XML parsers. Also, why in the world are they using WebSphere? Use Tomcat, for God's sake.

    </rant>

    Aside from that, chances are that if the web service is exposed to third parties over the internet, it won't reach 1k transactions per second unless you happen to have an OC3 or multiple T3 lines. Most businesses can't afford tier 1 hosting - tier 1 being the backbone providers who own their own fiber. Everyone else just leases the line or rents rack space. Even then, paying for unrestricted bandwidth isn't cheap.

    Of the big firms I know, they all have their own lines. In fact, all of them have redundant lines to their network ops. The really big guys have multiple facilities with multiple lines to multiple providers. This way they ensure that a loss of one connection doesn't bring down the whole system. Most of the companies that can afford this stuff are using J2EE.

    Who is the target audience of this benchmark?

    peter
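    The pull-parsing approach Peter recommends can be sketched with the StAX API (javax.xml.stream, standardized as JSR 173; at the time of this thread it shipped separately rather than with the JDK). This illustrative fragment - class and element names are hypothetical - counts elements as events stream past, without ever building a DOM tree in memory:

```java
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;

public class PullParseDemo {
    // Count <item> start tags by pulling events one at a time; memory use
    // stays constant no matter how large the document is.
    static int countItems(String xml) {
        try {
            XMLStreamReader r = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new StringReader(xml));
            int count = 0;
            while (r.hasNext()) {
                if (r.next() == XMLStreamConstants.START_ELEMENT
                        && r.getLocalName().equals("item")) {
                    count++;
                }
            }
            r.close();
            return count;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(countItems("<list><item/><item/><item/></list>"));
    }
}
```

    A DOM parse of the same document would allocate a node object per element before any application code runs, which is exactly the baseline cost the thread is arguing about.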
  146. WebSphere?

    So they looked around for the slowest WS implementation?
  147. This group seems to have a chip on its shoulder. It’s like - don’t say anything bad about my knowledge base or I will use all my engineering savvy to further confuse matters into oblivion. Get with it: technology and business trends, along with globalization, will bury those who get too comfortable with one language. Learn French or Italian too, for God’s sake. At the end of the day it is what it is.

    Minds bigger than yours. No it’s not. Yes it is. No it’s not!!! Gazillion infinity… Infinity Infinity…

    /yak
  148. Java is a standard. A standard for accessing databases, delivering dynamic content, processing business logic. It works on all the platforms and plays well with vendors.

    My very biased opinion is that if you want interoperability, do Java.
  149. Out of curiosity, has anybody run faster or more memory-efficient WS implementations in Java? I've got one that is Axis-based and it runs reasonably, but it can't handle particularly large messages... it chokes around 15MB for the SOAP message. I can understand that it's preferable to use SOAP with Attachments in this situation, but if I don't have that flexibility right now, are some SOAP implementations more memory-friendly than others? This is using Axis 1, which is certainly not StAX-based or anything else that may be more memory-efficient...
  150. hardware acceleration

    Out of curiosity, has anybody run faster or more memory-efficient WS implementations in Java? I've got one that is Axis-based and it runs reasonably, but it can't handle particularly large messages... it chokes around 15MB for the SOAP message. I can understand that it's preferable to use SOAP with Attachments in this situation, but if I don't have that flexibility right now, are some SOAP implementations more memory-friendly than others? This is using Axis 1, which is certainly not StAX-based or anything else that may be more memory-efficient...

    If you're really handling files that big, I would suggest considering a hardware XML accelerator. There are several manufacturers out there and it can provide a 10x improvement over Xerces/Xalan/Crimson. I don't work for Sarvega, but I do know their product is pretty fast.

    peter
  151. Mind’s bigger than yours

    That’s the problem with standards: there are too many of them. Most big companies have to maintain many skill-sets to manage a host of businesses from acquisitions and internal technological advances in order to have control over their legacy. Oh, so I have an idea: let’s try to consolidate on a platform and maybe the word “standard” can have meaning again, or at least help an industry work toward bridging efforts formed from differences. With all due respect to XML! But let’s not confuse matters, because what’s different is the same - you know - it’s a standard.

    /yak
  152. I guess that the Resin guys have done a great job developing their own implementations of an XML parser and web services libraries. It would be interesting if we could test .NET against Resin...
  153. As echoed elsewhere, I would be inclined to use different XML libraries. This implicit ability to tune by selecting a library or implementation is a great benefit in the Java environment. For .NET I have only seen MONO as an alternative, which usually benches far below the Microsoft CLR. So, perhaps I'm uninformed ... what great alternatives do I have in the .NET arena when I face performance issues?
  154. Heinz writes:
    However, up to now, I have not seen any major improvements in JDK 1.5, that would double the performance of a Java program. PLEASE PROVE ME WRONG! On a micro benchmark level, you would always find those 2x improvements, but also degradations of performance. I am not looking for that, but rather, a massive improvement in the runtime behaviour of a typical Java program.

    Have you seen the 1.5 performance whitepaper?
    http://java.sun.com/performance/reference/whitepapers/5.0_performance.html

    What about the 175+% improvement in specjbb, 20% with VolanoMark, 10 to 20% in startup, 10 to 15% reduction in footprint.
    What about parallel gc and cms?
    What about specific optimizations like the one for System.arraycopy()?

    Heinz, do your homework before rushing to misinform in your "newsletter"!
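    The System.arraycopy() optimization referred to above is a JIT intrinsic: HotSpot can replace the call with a bulk memory move rather than a bounds-checked per-element loop. A small sketch (the class and method names are hypothetical, and the timing harness is omitted; this only shows the two equivalent forms being compared):

```java
import java.util.Arrays;

public class CopyDemo {
    // Bulk copy: HotSpot treats System.arraycopy as an intrinsic, so this
    // can compile down to a single optimized memory move.
    static int[] bulkCopy(int[] src) {
        int[] dst = new int[src.length];
        System.arraycopy(src, 0, dst, 0, src.length);
        return dst;
    }

    // Element-by-element copy: semantically identical, but relies on the
    // JIT to eliminate per-iteration bounds checks.
    static int[] loopCopy(int[] src) {
        int[] dst = new int[src.length];
        for (int i = 0; i < src.length; i++) {
            dst[i] = src[i];
        }
        return dst;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        System.out.println(Arrays.equals(bulkCopy(a), loopCopy(a)));
    }
}
```

    Both produce the same result; the whitepaper's claim is only about how cheaply the first form executes once compiled.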
  155. What about the 175+% improvement in specjbb, 20% with VolanoMark, 10 to 20% in startup, 10 to 15% reduction in footprint.
    What about parallel gc and cms?
    What about specific optimizations like the one for System.arraycopy()?

    Ok, say that I buy that. Let us also say that I am buying "The J2SE 5.0 release of the HotSpot™ virtual machine includes 8000 bug fixes."

    But that means that when Cameron and I had our performance discussion in 2003, he was defending a system that after 8 years of existence and hype had 8000 bugs in the HotSpot virtual machine alone, and God knows how many elsewhere. Ponder that. :)

    Java was hailed to the sky every year from the beginning. It is best for us all that we don't start digging into Java quality in 1995...

    You must understand that even if Java is perfect now, flawless (if anybody believes that), the market does not allow a product to come into being this way. The reputation is destroyed forever. And whose fault is that?

    Regards
    Rolf Tollerud
    But that means that when Cameron and I had our performance discussion in 2003, he was defending a system that after 8 years of existence and hype had 8000 bugs in the HotSpot virtual machine alone, and God knows how many elsewhere. Ponder that. :)
    Java was hailed to the sky every year from the beginning. It is best for us all that we don't start digging into Java quality in 1995...
    You must understand that even if Java is perfect now, flawless (if anybody believes that), the market does not allow a product to come into being this way. The reputation is destroyed forever. And whose fault is that?
    Regards
    Rolf Tollerud

    Remember that the HotSpot VM has been continuously evolved, written and re-written since Sun acquired the company that started it. It's not the same source that Sun has been debugging for the past 8 years. See for yourself, download the source! Maybe Sun should advertise what goes into each release in terms of compiler optimizations, GCs (e.g. CMS, parallel GC) and other runtime features and improvements.
    VM technology has always been a hot topic of research (e.g. MRE '05). Most of us just look at the new APIs that get added to the platform, not the underlying engine that supports it.

    Concerning your second point, "The reputation is destroyed for ever" -- The specs are out there and anyone's free to put out an implementation. If you think Sun's is buggy, go to IBM's or BEA's or Kaffe or even SableVM. But blaming Sun for the "reputation" of Java is somewhat disingenuous.
  157. The same thing happened with EJB. 1.0 was acclaimed as the finest thing since sliced bread; later it was "done by aliens from outer space" (Cameron Purdy).

    I admit that Java seems to perform better with sdk 1.5. Here are some numbers,
    _______________________

    C:\jdk1.5.0_01\bin\java.exe -classpath . Linpack

    500 x 500
    198.657 MFlops

    1000 x 1000
    199.008 MFlops
    _______________________

    .NET 1.1

    500 x 500
    223.556 MFlops

    1000 x 1000
    303.509 MFlops
    _______________________

    C:\jdk1.5.0_01\bin\java.exe -classpath . -server Linpack

    500 x 500
    356.738 MFlops

    1000 x 1000
    369.021 MFlops
    _______________________

    1.5 ignores the -Xms/-Xmx settings (at least from what I can see in Task Manager) and uses only marginally more memory than .NET.

    The main difference between the Java and .NET at the moment is that .NET can compile programs and libraries when they are installed, and then those programs and libraries can be loaded almost like C code, can be shared between processes, and can be mapped into memory or cached just like ordinary unmanaged modules. This is done for all the system libraries that come with .NET which gives you a better startup time since there's lots of basic library code that does not need to be JITed.

    Conclusion.

    It is all well and good, but the point is that all this comes too late, as demonstrated by the job numbers at indeed.com. 10 years to construct a decent language & libraries?

    Regards
    Rolf Tollerud
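    One caveat with single-run figures like the Linpack numbers above is HotSpot warm-up: under -server in particular, the first passes through a kernel pay interpretation and JIT-compilation costs, so best-of-N timing is the fairer measure. A minimal sketch of such a harness (the kernel below is a stand-in for the benchmark code, not Linpack itself, and the class name is hypothetical):

```java
public class WarmupDemo {
    // Sink prevents the JIT from eliminating the kernel as dead code.
    static volatile double sink;

    // Busy numerical work standing in for the benchmark kernel.
    static double kernel(int n) {
        double s = 0;
        for (int i = 1; i <= n; i++) {
            s += 1.0 / i;
        }
        return s;
    }

    // Time several runs and keep the best: early runs include interpreter
    // and compilation overhead, later ones reflect steady-state speed.
    static long bestOfNanos(int runs, int n) {
        long best = Long.MAX_VALUE;
        for (int r = 0; r < runs; r++) {
            long t0 = System.nanoTime();
            sink = kernel(n);
            long dt = System.nanoTime() - t0;
            if (dt < best) {
                best = dt;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best ns: " + bestOfNanos(10, 1000000));
    }
}
```

    Running a harness like this under both -client and -server, and with different heap settings, would show how much of the reported gap is steady-state throughput and how much is warm-up.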
  158. P.S.

    Are there any sadder words in the English language than "it could have been"?
  159. micro benchmarking

    Heinz writes:
    However, up to now, I have not seen any major improvements in JDK 1.5, that would double the performance of a Java program. PLEASE PROVE ME WRONG! On a micro benchmark level, you would always find those 2x improvements, but also degradations of performance. I am not looking for that, but rather, a massive improvement in the runtime behaviour of a typical Java program.
    Have you seen the 1.5 performance whitepaper?
    http://java.sun.com/performance/reference/whitepapers/5.0_performance.html
    What about the 175+% improvement in SpecJBB, 20% with VolanoMark, 10 to 20% in startup, 10 to 15% reduction in footprint. What about parallel gc and cms? What about specific optimizations like the one for System.arraycopy()? Heinz, do your homework before rushing to misinform in your "newsletter"!
    We all have written much about the dangers of misinterpreting or extrapolating results from benchmarks. I myself found that the first application I experimented with ran 10% slower on 1.5.0 (using 1.4.2_04 as the baseline). Does that mean you'll find that 1.5.0 is slower? That depends on what your application does and, more importantly, how it works to achieve that behavior.

    To call Heinz’s newsletter propaganda is akin to calling VolanoMark, SpecJBB and the other benchmarks propaganda. These benchmarks, as well as Heinz’s work, have been carefully crafted to demonstrate very specific micro-performance improvements. Applications are mixtures of these behaviors (as well as others), and the overall effectiveness or visibility of any optimization will vary greatly. For example, my application apparently didn’t take advantage of the features optimized in SpecJBB or VolanoMark, so as far as I’m concerned, 1.5.0 makes my life worse!

    So, we have a 10-20% improvement in startup. Given that this number applies equally to all applications, moving to 1.5.0 means that I give up 10% performance for a savings of 200ms in startup time for my server application.

    This “performance improvement” may be valuable to you, but I wouldn’t give a rat's a$$ for it ;) which brings me to my second point: you may care about that 200ms (seriously, you may) but I don’t. Even worse, your performance improvement may cost me! Take the new HashMap implementation that Heinz has so carefully documented. The new hash may hash faster, but it creates more collisions. That causes the performance of this O(1) data structure to move towards O(n). So the question is: do you want a slower hash that creates a better-distributed map, or a fast hash that creates a map which may not be so nicely distributed? If you base your answer on a benchmark that doesn’t map well to your application, you’ll most likely make the wrong decision.

    Instead of criticizing Heinz, you should be thanking him for spending his valuable time discovering interesting features of the language and the platform and sharing them with us all. He doesn’t get paid for doing this; it’s a way of contributing back to the community.

    Kirk
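    The collision trade-off Kirk describes comes from HashMap's power-of-two tables: the bucket index uses only the low bits of the hash, so keys whose hashCodes differ only in their high bits all collide unless a supplemental mixing step spreads them. A sketch (the mixing function below is illustrative, in the spirit of the 1.4/5.0 supplemental hash, not copied from Sun's source; the class name is hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

public class BucketDemo {
    // HashMap-style index into a power-of-two table: only the low bits of
    // the hash select the bucket.
    static int bucket(int hash, int tableLength) {
        return hash & (tableLength - 1);
    }

    // Supplemental mixing step: XOR-shifts fold the high bits down into
    // the low bits so the bucket index depends on the whole hashCode.
    static int mix(int h) {
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    // 16 keys whose hashCodes vary only in the high bits, dropped into a
    // 16-bucket table: how many distinct buckets do they occupy?
    static int distinctBuckets(boolean mixed) {
        Set<Integer> buckets = new HashSet<Integer>();
        for (int i = 0; i < 16; i++) {
            int h = i << 16; // varies only in high bits
            buckets.add(bucket(mixed ? mix(h) : h, 16));
        }
        return buckets.size();
    }

    public static void main(String[] args) {
        // Without mixing everything lands in one bucket (O(n) lookups);
        // with mixing the keys spread across all 16 buckets.
        System.out.println("raw: " + distinctBuckets(false)
                + ", mixed: " + distinctBuckets(true));
    }
}
```

    This is exactly the tension in the post above: the extra XOR-shifts cost a few instructions per operation, but skipping them risks degenerate chains for poorly distributed hashCodes.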
  160. micro benchmarking

    Heinz writes:
    However, up to now, I have not seen any major improvements in JDK 1.5, that would double the performance of a Java program. PLEASE PROVE ME WRONG! On a micro benchmark level, you would always find those 2x improvements, but also degradations of performance. I am not looking for that, but rather, a massive improvement in the runtime behaviour of a typical Java program.
    Have you seen the 1.5 performance whitepaper? http://java.sun.com/performance/reference/whitepapers/5.0_performance.html What about the 175+% improvement in SpecJBB, 20% with VolanoMark, 10 to 20% in startup, 10 to 15% reduction in footprint. What about parallel gc and cms? What about specific optimizations like the one for System.arraycopy()? Heinz, do your homework before rushing to misinform in your "newsletter"!
    We all have written much about the dangers of misinterpreting or extrapolating results from benchmarks. I myself found that the first application I experimented with ran 10% slower on 1.5.0 (using 1.4.2_04 as the baseline). Does that mean you'll find that 1.5.0 is slower? That depends on what your application does and, more importantly, how it works to achieve that behavior. To call Heinz's newsletter propaganda is akin to calling VolanoMark, SpecJBB and the other benchmarks propaganda. These benchmarks, as well as Heinz's work, have been carefully crafted to demonstrate very specific micro-performance improvements. Applications are mixtures of these behaviors (as well as others), and the overall effectiveness or visibility of any optimization will vary greatly. For example, my application apparently didn't take advantage of the features optimized in SpecJBB or VolanoMark, so as far as I'm concerned, 1.5.0 makes my life worse! So, we have a 10-20% improvement in startup. Given that this number applies equally to all applications, moving to 1.5.0 means that I give up 10% performance for a savings of 200ms in startup time for my server application. This "performance improvement" may be valuable to you, but I wouldn't give a rat's a$$ for it ;) which brings me to my second point: you may care about that 200ms (seriously, you may) but I don't. Even worse, your performance improvement may cost me! Take the new HashMap implementation that Heinz has so carefully documented. The new hash may hash faster, but it creates more collisions. That causes the performance of this O(1) data structure to move towards O(n). So the question is: do you want a slower hash that creates a better-distributed map, or a fast hash that creates a map which may not be so nicely distributed? If you base your answer on a benchmark that doesn't map well to your application, you'll most likely make the wrong decision. Instead of criticizing Heinz, you should be thanking him for spending his valuable time discovering interesting features of the language and the platform and sharing them with us all. He doesn't get paid for doing this; it's a way of contributing back to the community.
    Kirk

    I don't recall calling Heinz's newsletter 'propaganda'; it contains misinformation that he, to his credit, sometimes corrects in follow-up emails. And I certainly didn't say that any app running on 1.4.2 will run faster on 1.5. It may, if you tune it to take advantage of the new features and enhancements (e.g. ergonomics, enabling the adaptive sizing policy, etc). The classic example is with Vectors and Hashtables: you'll most likely see a performance degradation going from 1.3.1 to 1.4.2 to 1.5.0, but if you change your implementation to use Lists and Sets you'll see an improvement. If your app is not taking advantage of the new features, make it so that it does and you will most likely see a difference. If not, report it to Sun or make others aware of the problems. But please don't go claiming "I have not seen any major improvements in JDK 1.5". It is disingenuous at best and ignorant at worst. Is this an attempt to get Sun to invest more in the area of performance, or are you trying to convince the community to move to BEA or IBM? I don't see how saying something like that would otherwise help you or anyone else, as a matter of fact.

    As far as benchmarking is concerned, we don't have any instruments of comparison other than industry-standard benchmarks when talking about performance; that's why I mentioned SpecJBB and VolanoMark. They may be microbenchmarks to you, but others use them to make purchasing decisions.

    Again, no one's forcing anyone to move to 1.5. If you have a softswitch app where you need a response time of less than 200ms, then by all means go try it. If you fail to take advantage of the improvements, stick with what you're running and wait for Moore's law to improve your app's performance.

    Sun has to be commended for trying to improve their VM in all areas at the same time (startup, compiler, GC, runtime, footprint and others) while adding tons of features to the platform. Show me another VM that attempts to do that!

    Heinz does a pretty good job covering new apis and techniques, but when it comes to performance it's not the first time he's put his foot in his mouth. Look at his earlier newsletters about performance.
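    The Vector-to-List migration mentioned above is mostly mechanical: Vector and Hashtable synchronize every operation, while ArrayList and HashMap do not, and locking can be added back only where a collection is actually shared (e.g. via java.util.Collections.synchronizedList). A hypothetical before/after sketch (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class ListMigrationDemo {
    // Before: Vector acquires its monitor on every add() and iteration
    // step, paying synchronization cost even in single-threaded code.
    static int sumVector(int n) {
        Vector<Integer> v = new Vector<Integer>();
        for (int i = 0; i < n; i++) {
            v.add(i);
        }
        int s = 0;
        for (int x : v) {
            s += x;
        }
        return s;
    }

    // After: ArrayList is unsynchronized; wrap it with
    // java.util.Collections.synchronizedList(l) only if it is shared
    // between threads.
    static int sumList(int n) {
        List<Integer> l = new ArrayList<Integer>();
        for (int i = 0; i < n; i++) {
            l.add(i);
        }
        int s = 0;
        for (int x : l) {
            s += x;
        }
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sumVector(100) == sumList(100));
    }
}
```

    The results are identical; the point of the migration is that the uncontended-lock overhead disappears from the hot path, which is why the same app can speed up across JDK versions after the change even though the old code slows down.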
  161. micro benchmarking

    Heinz writes:
    On a micro benchmark level, you would always find those 2x improvements, but also degradations of performance. I am not looking for that, but rather, a massive improvement in the runtime behaviour of a typical Java program.
    Heinz does a pretty good job covering new apis and techniques, but when it comes to performance it's not the first time he's put his foot in his mouth. Look at his earlier newsletters about performance.

    I do tend to put my foot in my mouth, and it hurts! And you are correct, I did once publish a performance newsletter with incorrect conclusions, based on a problem in my performance test harness (http://www.javaspecialists.co.za/archive/Issue070.html).

    In this case, I stand by what I wrote, and I hope that time will prove me wrong. I really don't mind. We are fighting a battle against a company with an unlimited amount of resources and very smart engineers.

    That said, do you have an example of a real-world program that you have written, and which now has a "massive improvement in the runtime behaviour"?

    Heinz
    --
    http://www.javaspecialists.co.za
  162. micro benchmarking

    Heinz writes:
    On a micro benchmark level, you would always find those 2x improvements, but also degradations of performance. I am not looking for that, but rather, a massive improvement in the runtime behaviour of a typical Java program.
    Heinz does a pretty good job covering new apis and techniques, but when it comes to performance it's not the first time he's put his foot in his mouth. Look at his earlier newsletters about performance.
    I do tend to put my foot in my mouth, and it hurts! And you are correct, I did once publish a performance newsletter with incorrect conclusions, based on a problem in my performance test harness (http://www.javaspecialists.co.za/archive/Issue070.html).
    In this case, I stand by what I wrote, and I hope that time will prove me wrong. I really don't mind. We are fighting a battle against a company with an unlimited amount of resources and very smart engineers.
    That said, do you have an example of a real-world program that you have written, and which now has a "massive improvement in the runtime behaviour"?
    Heinz
    --
    http://www.javaspecialists.co.za

    Most "real world" apps can be modelled as a set of micro-benchmarks, so it's not unexpected to see the improvements Sun is claiming (e.g. VolanoMark is derived from a 'real world' app).

    Heinz, I don't mean to be on your case, I enjoy your newsletters but please try to avoid making blanket statements such as the one you made, especially given your position in the community as an educator.
  163. micro benchmarking

    I don't recall calling Heinz's newsletter 'propaganda', it contains misinformation that he, and to his credit, sometimes corrects in follow up emails.

    I don't know if English is your first language so forgive me for presuming not and offering this explination. The word misinformation is very provocative in that it implies that Heinz is deliberatly trying to mis-lead us. If that were the case, why would publish a correction. It would seem that instead of trying to mis-lead us, he is trying to inform us.
    Consider this: just about every author that I've talked to will not publish a retraction or correction.
    And I certainly didn't say that any app running on 1.4.2 will run faster on 1.5. It may, if you tune it to take advantage of the new features and enhancements (e.g. ergonomics, enabling the adaptive sizing policy, etc.). The classic example is with Vectors and Hashtables: you'll most likely see a performance degradation going from 1.3.1 to 1.4.2 to 1.5.0, but if you change your implementation to use Lists and Sets you'll see an improvement.
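    The Vector-versus-List point can be seen with a quick (and admittedly micro) benchmark. The class below is my own illustrative sketch, not from the thread, and the exact numbers will vary wildly by JVM version and hardware:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class CollectionTiming {
    // Appends and reads n boxed ints through the List interface; returns elapsed nanos.
    static long time(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            list.add(i);        // Vector takes a monitor lock on every call; ArrayList doesn't
        }
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += list.get(i); // same story for reads
        }
        if (sum < 0) throw new AssertionError(); // keep the loop from being optimized away
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1000000;
        time(new Vector<Integer>(), n);     // warm-up so the JIT compiles both paths
        time(new ArrayList<Integer>(), n);
        System.out.println("Vector:    " + time(new Vector<Integer>(), n) + " ns");
        System.out.println("ArrayList: " + time(new ArrayList<Integer>(), n) + " ns");
    }
}
```

    On most JVMs the unsynchronized ArrayList comes out ahead, which is exactly the substitution being described here.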

    What I expect is that moving to the next version should *NOT* result in a degradation of performance of an existing application. IOW, I don't want to pay a greater performance penalty for using Vector than I already am. I also would not necessarily expect a performance improvement unless I started using the newer features. I might start accidentally using the newer "technical tuning" features. For example, the default GC changes from a single-threaded mark-and-sweep in 1.4 to a parallel version in 1.5 (which defaults to a single thread on a single-CPU machine).
    If not, report it to Sun or make others aware of the problems. But please don't go claiming "I have not seen any major improvements in JDK 1.5".

    Well, if the truth be known, it's not only Heinz's experience; it's mine, and I know it's Jack Shirazi's (he's got something here on it). That said, there are conditions where I will strongly recommend that a client move to 1.5. It needs to be considered on a case-by-case basis.

    Heinz can speak for himself, but I think he is referring to the big changes in the JVM that have resulted in big improvements in performance. These changes started with JITs, HotSpot profiling, on-stack replacement, breaking into loops to perform OSR, and the development of separate client and server libraries. These developments were all the result of research, and the question is, who is doing that type of research today? To be fair, IBM, Sun, and BEA are doing research in the area of memory management. But Sun has decided against having two teams for the development of the client and server libraries. AFAIK, these libraries will be merged. There are many other areas where cuts in research are happening, and this is happening at a time when the performance question seems to be becoming more important as .NET catches up (and .NET will make gains because it has no other direction to travel).
    As far as benchmarking is concerned, we don't have any other instruments of comparison except for industry standard benchmarks when talking about performance.

    But you do: you have the applications that you work with every day. Granted, these are not carefully prepared benchmarks designed to answer very specific questions. But they are the measure of performance that counts.

    Kirk
  164. micro benchmarking

    I don't know if English is your first language, so forgive me for presuming not and offering this explanation. The word misinformation is very provocative in that it implies that Heinz is deliberately trying to mislead us.

    answer.com says: misinform: To provide with incorrect information.

    Where does it say "mislead"?

    What I expect is that moving to the next version should *NOT* result in a degradation of performance of an existing application. IOW, I don't want to pay a greater performance penalty for using Vector than I already am. I also would not necessarily expect a performance improvement unless I started using the newer features. I might start accidentally using the newer "technical tuning" features. For example, the default GC changes from a single-threaded mark-and-sweep in 1.4 to a parallel version in 1.5 (which defaults to a single thread on a single-CPU machine).

    And why would Sun's VM be any different in this regard than, say, IBM's, BEA's or any other software? Just because there's a new version out there doesn't mean it will run your app faster, slower or even at the same speed. That's for you to find out *before* upgrading. The same goes for hardware: just because you upgrade to a faster CPU doesn't mean your I/O-bound app will run faster.

    These developments were all the result of research, and the question is, who is doing that type of research today? To be fair, IBM, Sun, and BEA are doing research in the area of memory management. But Sun has decided against having two teams for the development of the client and server libraries. AFAIK, these libraries will be merged. There are many other areas where cuts in research are happening, and this is happening at a time when the performance question seems to be becoming more important as .NET catches up (and .NET will make gains because it has no other direction to travel).

    Who knows what Sun is cooking, but why would merging the compilers be a bad thing? If I can get the benefits of the client compiler for startup and footprint and the optimization of the server compiler, I'm all for it! They've already done it for the GC (i.e. adaptive sizing); I hope they do it for the compiler. But maybe your views differ, because that means less work for "consultants"...

    I disagree with you, research in VMs hasn't slowed down, quite the opposite... just look at these:
    http://www.research.ibm.com/mre05/
    http://www.research.ibm.com/ismm04/
    http://www.usenix.org/events/vm04/
    http://www.ecn.purdue.edu/icpp2004/

    And Sun's certainly not slowing down; with 1.5 they gave us Parallel and Concurrent GCs!
  165. Microsoft way...

    Well, it's always been the Microsoft way - first claim the superiority of their product with massive propaganda, and then slowly catch up...

    Java (HotSpot) has a broad field of as-yet-unimplemented optimizations... but it's definitely a cutting-edge one.

    good reading:
    http://www.research.ibm.com/people/d/dgrove/talks/SoftwareOptimizationAndVirtualMachines.pdf

    Cheers,

    D.
  166. micro benchmarking

    answer.com says: misinform: To provide with incorrect information. Where does it say "mislead"?

    As I said, misinformation implies a deliberate attempt, which is different from a mistake.
    Just because there's a new version out there, it doesn't mean it will run your app faster, slower or even at the same speed. That's for you to find out *before* upgrading.

    Agreed, but we are talking about code that should be stable. IOW, I see no reason for Sun to diddle with the implementation of Vector. There may be other changes in the JVM that may account for a slowdown, but IMHO it implies that they've gone and broken something.
    The same goes for hardware, just because you upgrade to a faster CPU, it doesn't mean your I/O bound app will run faster.

    But nor should a faster CPU slow an I/O-bound application down.

    Who knows what Sun is cooking
    I think they are being very transparent about what they are putting into Mustang. The coming performance enhancements are either very specialized or low-level, from what I can see. Certainly nothing earth-shaking.

    but why would merging the compilers be a bad thing? If I can get the benefits of the client compiler for startup and footprint and the optimization of the server compiler, i'm all for it!
    The teams took competing strategies to achieve their optimizations. Many of them are not compatible. For example, the strategies for knowing when to compile a method or inline it, and the code to support those strategies, cannot live together. Having separate teams allowed them to try different things, explore different ideas, feed off of each other and cross-pollinate. Having teams compete to build products is a tactic used by many companies (including MS). It helps build best of breed.
    And Sun's certainly not slowing down; with 1.5 they gave us Parallel and Concurrent GCs!

    Err... these garbage collectors are in 1.4. Research hasn't stopped, that is for sure; it continues. What we are talking about is a reduction, not a stoppage. For example, are we really sure that adaptive sizing will translate into better performance in production? I've no evidence on which to make a decision.
  167. micro benchmarking

    answer.com says: misinform: To provide with incorrect information. Where does it say "mislead"?
    As I said, misinformation implies a deliberate attempt, which is different from a mistake.

    Kirk,
     You are thinking of disinformation. http://dictionary.reference.com/search?q=disinformation

    If you look at the beginning of his sentence in the original statement -
    I don't recall calling Heinz's newsletter 'propaganda',
    - he clears up any misunderstanding of implied intent.

    The word misinformation doesn't imply anything more than what is defined. Someone might read into it or have been taught that misinformation == disinformation. But that doesn't actually change

    If I say "I was misinformed", was I intentionally lied to? I doubt if anyone would think that (not many at least). And since misinformed and misinformation come from the same root word ... .
  168. micro benchmarking

    oops - But that doesn't actually change real meaning of the word.
  169. micro benchmarking

    answer.com says: misinform: To provide with incorrect information. Where does it say "mislead"?
    As I said, misinformation implies a deliberate attempt, which is different from a mistake.
    Kirk, you are thinking of disinformation. http://dictionary.reference.com/search?q=disinformation If you look at the beginning of his sentence in the original statement -
    I don't recall calling Heinz's newsletter 'propaganda',
    - he clears up any misunderstanding of implied intent. The word misinformation doesn't imply anything more than what is defined. Someone might read into it or have been taught that misinformation == disinformation. But that doesn't actually change the real meaning of the word. If I say "I was misinformed", was I intentionally lied to? I doubt anyone would think that (not many, at least). And since misinformed and misinformation come from the same root word...

    Can we assume that I am not deliberately spreading misinformation, and that I am attempting to serve the Java community by writing on subjects that should interest Java Specialists?

    Question: where am I unintentionally spreading misinformation in my "newsletter"?

    Some of the newsletters are written very quickly (30 minutes due to project deadlines) others take me two days. I try my best to check all the things that I write with experiments, but there is no doubt that errors can and do creep in and I am very happy to offer corrections.

    If "The Java Specialists' Newsletter" is publicly accused of spreading misinformation, I would at least like the opportunity to correct that perception.

    Kind regards

    Heinz
    --
    http://www.javaspecialists.co.za
  170. micro benchmarking

    answer.com says: misinform: To provide with incorrect information. Where does it say "mislead"?
    As I said, misinformation implies a deliberate attempt, which is different from a mistake.
    Kirk, you are thinking of disinformation. http://dictionary.reference.com/search?q=disinformation If you look at the beginning of his sentence in the original statement -
    I don't recall calling Heinz's newsletter 'propaganda',
    - he clears up any misunderstanding of implied intent. The word misinformation doesn't imply anything more than what is defined. Someone might read into it or have been taught that misinformation == disinformation. But that doesn't actually change the real meaning of the word. If I say "I was misinformed", was I intentionally lied to? I doubt anyone would think that (not many, at least). And since misinformed and misinformation come from the same root word...
    Can we assume that I am not deliberately spreading misinformation, and that I am attempting to serve the Java community by writing on subjects that should interest Java Specialists? Question: where am I unintentionally spreading misinformation in my "newsletter"? Some of the newsletters are written very quickly (30 minutes due to project deadlines), others take me two days. I try my best to check all the things that I write with experiments, but there is no doubt that errors can and do creep in, and I am very happy to offer corrections. If "The Java Specialists' Newsletter" is publicly accused of spreading misinformation, I would at least like the opportunity to correct that perception. Kind regards, Heinz -- http://www.javaspecialists.co.za
    Heinz,
     My comments were solely on the incorrect correction of Kirk on the word misinformation. I wasn't commenting on the validity of "d taye"'s comments in regard to your newsletters.

     Your comments and articles are much appreciated. We all make mistakes and should be allowed to correct them (I used to think that MS stuff was the best thing since sliced bread :) ).

    mark
  171. micro benchmarking

    answer.com says: misinform: To provide with incorrect information. Where does it say "mislead"?
    As I said, misinformation implies a deliberate attempt, which is different from a mistake.
    Kirk, you are thinking of disinformation. http://dictionary.reference.com/search?q=disinformation If you look at the beginning of his sentence in the original statement -
    I don't recall calling Heinz's newsletter 'propaganda',
    - he clears up any misunderstanding of implied intent. The word misinformation doesn't imply anything more than what is defined. Someone might read into it or have been taught that misinformation == disinformation. But that doesn't actually change the real meaning of the word. If I say "I was misinformed", was I intentionally lied to? I doubt anyone would think that (not many, at least). And since misinformed and misinformation come from the same root word...
    Can we assume that I am not deliberately spreading misinformation, and that I am attempting to serve the Java community by writing on subjects that should interest Java Specialists? Question: where am I unintentionally spreading misinformation in my "newsletter"? Some of the newsletters are written very quickly (30 minutes due to project deadlines), others take me two days. I try my best to check all the things that I write with experiments, but there is no doubt that errors can and do creep in, and I am very happy to offer corrections. If "The Java Specialists' Newsletter" is publicly accused of spreading misinformation, I would at least like the opportunity to correct that perception. Kind regards, Heinz -- http://www.javaspecialists.co.za

    For the record, I don't believe you're "deliberately spreading misinformation", and that was not what I meant when I used the word "misinform". Kirk has it wrong.

    I do believe you should check your facts (or have them checked by reviewers) before putting out your newsletters/articles.
  172. micro benchmarking

    I do believe you should check your facts (or have them checked by reviewers) before putting out your newsletters/articles.

    I try to have them checked when possible. For example, when I investigated the new hashing in JDK 1.4, I sent questions to Joshua Bloch, which he very kindly answered.

    At the very least, it would be great if someone could check my English grammar and spelling, since my mother tongue is German and my essays in school always came back with more red than blue ink!

    I've done some soul searching since your comments, to try to find where I might have sent incorrect facts in my newsletter. I have had lots of positive feedback: http://www.javaspecialists.co.za/quotes.html and you are the first person to suggest that I am propagating incorrect facts about Java. I have had commercial tool vendors complaining to me that the level of information distributed for free would affect their business :)

    I went through the 100+ newsletters that I published under my brand in the last 4.5 years, and have found 3 where I know that there were mistakes:

    Due to a compiler bug in JBuilder 3:
    http://www.javaspecialists.co.za/archive/Issue013a.html

    Mistake in my test harness (already mentioned):
    http://www.javaspecialists.co.za/archive/Issue070.html

    Mistake in assuming deadlock was "THE" Swing Deadlock:
    http://www.javaspecialists.co.za/archive/Issue101.html

    Besides those three, I simply do not know what other mistakes have crept in. Please tell me so that I can correct them!

    (I must admit, I am surprised myself that I only found 3 mistakes in the last 4.5 years of writing advanced Java newsletters and NOT having reviewers... there simply MUST be more ;-)

    So d taye, here is my invitation to join my board of reviewers :-) I would be very pleased if you would accept the invitation.

    Perhaps we should take this discussion offline? Please pop me an email to heinz at javaspecialists dot co dot za so we can discuss the reviewer process in detail.

    Kind regards

    Heinz
    --
    http://www.javaspecialists.co.za
  173. micro benchmarking

    we are talking about code that should be stable.

    Stable means very little or no new features, and I wouldn't want that (not that I don't want stability either, but I can certainly engineer around it). If you want stable, follow what Sun says: upgrade, but only within the family (1.4.2 to 1.4.2_0x).
    But nor should a faster CPU slow an I/O-bound application down.

    But it could! I had a situation with 5.0 where performance degraded because the mark-sweep GC couldn't keep up with the compiled code; code was simply being compiled too fast, forcing GCs.
    The teams took competing strategies to achieve their optimizations. Many of them are not compatible. For example, the strategies for knowing when to compile a method or inline it, and the code to support those strategies, cannot live together. Having separate teams allowed them to try different things, explore different ideas, feed off of each other and cross-pollinate. Having teams compete to build products is a tactic used by many companies (including MS). It helps build best of breed.

    Duplication of effort is bad. For Sun it could possibly mean wasting resources that could otherwise have gone into improving one code base; for users like me it means more headache in terms of configuration, deployment and support. On the other hand, for consultants it means more $$$. Ever asked why IBM or BEA don't expose their options to users? Is IBM's VM a "client" or a "server" VM? Should users care?
  174. micro benchmarking

    Duplication of effort is bad. For Sun it could possibly mean wasting resources that could otherwise have gone into improving one code base; for users like me it means more headache in terms of configuration, deployment and support. On the other hand, for consultants it means more $$$. Ever asked why IBM or BEA don't expose their options to users? Is IBM's VM a "client" or a "server" VM? Should users care?

    Hmm, having best of breed in terms of JVM optimizations *IS* improving the code base. Using your argument, we should all be using a single J2EE application server (maybe Sun's reference implementation?), as IBM, BEA, Orion, Oracle, etc. (sorry for the dozens of others that I forgot or left out) have all been wasting their time.

    I'm not sure why you're attacking consultants. Employees also do this type of work!

    IBM and BEA have yet other tactics in their attempts to improve performance. They are also based on working in a "competitive" environment. As for whether you should care if it's -server or -client? I'm not sure! However, I do know that the heuristics needed to make these decisions are difficult to encode and as such are left to technical experts (such as yourself?). It just means that we don't know how to encode the rules, and that's OK. Unisys as well as a number of other research groups are working on figuring that out. But don't worry about your little job. Answering those types of questions will still leave plenty of others for you to mull over.
  175. Garbage Collection

    Apart from the fact that performance benchmarks only lead to meaningless "My dad is stronger than yours" discussions (I would argue that productivity, ease-of-use and robustness are more important aspects of a platform), I'm a bit curious about what impact garbage collection had on this benchmark and how well the .Net runtime handles garbage collection in general.

    At least in my experience, garbage collection and/or the non-trivial task of adjusting the JVM to minimize its impact tends to be a major issue as system load increases. Unacceptable freeze-ups and even crashes tend to occur in Java-based systems when a full GC is running, and I have yet to see a parallel GC configuration that avoids these problems (if someone has, please post your command line :).

    Since this is to be expected in a resource-intensive operation like this, it would be very interesting to see how Microsoft solved this, both in terms of behaviour under high system load and implementation-wise. Anyone?
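    For what it's worth, here is the kind of command line that was available on HotSpot 1.4.2/5.0 for exactly this problem. The heap sizes and the MyApp class name are placeholders, and whether these flags actually shorten pauses has to be measured per application:

```shell
# Throughput (parallel) collector: parallelizes the young-generation collections
java -server -Xms1024m -Xmx1024m -XX:+UseParallelGC MyApp

# Concurrent mark-sweep: collects the old generation mostly concurrently with the
# application, trading some throughput for shorter stop-the-world pauses
java -server -Xms1024m -Xmx1024m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC MyApp
```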
  176. Any ideas . . .
  177. Comparison

    The testing should not be restricted to some particular areas or tasks in order to say that .NET is better than Java. The test cases should be put before the public; get their ideas, collect all the possible test cases on all grounds, and then try to do the comparison.
  178. So now when everything has calmed down and people have stopped calling Greg and MS names and nobody has posted for a while, how is it that this, after all, still seems to be a problem?

    Big Jar, Slow Startup - Help !!!!

    Regards
    Rolf Tollerud
    (excuse me that I exist!)
  179. I hope you have read and understood the product documentation, and that you can solve these kinds of problems on Java forums yourself.
  180. Can you answer this then, Juozas? But if you come with even the tiniest snide remark I will hunt you for 10 years! It seems that not many know or use the "jarindex" (and yet it was introduced already in JDK 1.2).
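    For reference, the index Rolf mentions is generated by the jar tool itself (main.jar below is a placeholder name); it writes an INDEX.LIST entry into the jar covering the jars named in its manifest Class-Path:

```shell
# Create/refresh INDEX.LIST in main.jar so the class loader can jump straight
# to the jar that contains a requested class instead of probing each one in turn
jar -i main.jar
```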

    The question is, once you know which jar file the class is in, how do you extract it? I found,


    public class JarClassLoader extends MultiClassLoader {
        private JarResources jarResources;

        public JarClassLoader(String jarName) {
            // Create the JarResources and suck in the jar file.
            jarResources = new JarResources(jarName);
        }

        protected byte[] loadClassBytes(String className) {
            // Support the MultiClassLoader's class name munging facility.
            className = formatClassName(className);
            // Attempt to get the class data from the JarResources.
            return jarResources.getResource(className);
        }
    }


    But that doesn't help any, does it? So how is it loaded? I know it is in java.util.zip somewhere, but I have other things to do. Considering that nobody in the link I gave you was able to help, I should not be bashed too hard for not knowing, IMO, not being a Java programmer.

    Regards
    Rolf Tollerud
  181. See JarFile class http://java.sun.com/j2se/1.4.2/docs/api/java/util/jar/JarFile.html and JarInputStream http://java.sun.com/j2se/1.4.2/docs/api/java/util/jar/JarInputStream.html
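    To make that pointer concrete: a minimal sketch of pulling one class's bytes out of a jar with JarFile, without unpacking the rest of it. The class and method names here are mine, and a real classloader would go on to feed the result to defineClass():

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;

public class JarByteReader {
    /** Reads one entry's bytes; JarFile seeks to it via the jar's central directory. */
    static byte[] loadClassBytes(String jarName, String className) throws IOException {
        String entryName = className.replace('.', '/') + ".class";
        JarFile jar = new JarFile(jarName);
        try {
            JarEntry entry = jar.getJarEntry(entryName);
            if (entry == null) {
                return null; // class not present in this jar
            }
            InputStream in = jar.getInputStream(entry);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            jar.close();
        }
    }
}
```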
  182. BTW, bytecode generation and verification, random number generator seeding, domain name resolution, and XML metadata parsing take more time than lazy unjarring on container startup. This stuff is tunable too.
  183. And it is recommended to split "native" executables into dynamically loadable libraries too (I used to be a Win32 programmer). The OS manages this stuff; it can load/unload shared code and resources. There are many ways and algorithms to optimize this stuff; it is better to read the docs than to discuss it here, as there is no single best way to optimize for all use cases.
  184. Rolf, where is this code from? I went to the Sun sources to look for "JarClassLoader," found no reference in the 1.5 or 1.4.2 source.

    I did look at JarFile.java, in java/util/jar, and it clearly refers to a jar file as a random access file, which doesn't require a full load to access a given entry.
  185. Hi Joseph,
     
    The links I got from Juozas were useful, but showed only how to extract a class to a file. BTW, the zip and jar classes do not provide truly random access, but there is directory info at the end that can be used for "faster than sequential access".

    With the information in the JarResources example, I can construct a classloader that does not load the whole jar file, but it appears a heavy process. How is it done in 1.5 for real? I want to benchmark the JarResources system against the Sun URLClassLoader system; how it works I still don't know, and neither do the Java people around me.

    Regards
    Rolf Tollerud
    Hi Joseph, The links I got from Juozas were useful, but showed only how to extract a class to a file. BTW, zip and jar classes do not provide truly random access, but there is dir info at the end that can be used for "faster than sequential access". With the information in the JarResources example, I can construct a classloader that does not load the whole jar file, but it appears a heavy process; how is it done in 1.5 for real? I want to benchmark the JarResources system against the Sun URLClassLoader system; how it works I still don't know, and neither do the Java people around me. Regards, Rolf Tollerud

    Rolf, JarResources has nothing to do with the core Java library. It's a sample provided by JavaWorld.

    ZipFile - the parent class of JarFile - uses a native call to handle loading of a specific entry. While it's possible that such calls can sequentially search an entire file to pull up a specific indexed entry, somehow I think that's outside the boundaries of reason for any decent JVM.

    Further, I think such claims based on investigation of a non-core classloader is outside acceptable limits of debate. Try again, except use Java this time.
  187. I rest my case

    "Rolf, JarResources has nothing to do with the core Java library. It's a sample provided by JavaWorld"

    Of course I know that. What I thought was that maybe 1.5 was doing something similar.

    "While it's possible that such calls can sequentially search an entire file to pull up a specific indexed entry, somehow I think that's outside the boundaries of reason for any decent JVM"

    Still that is how it is done.

    "Further, I think such claims based on investigation of a non-core classloader is outside acceptable limits of debate"

    Ok, end of discussion.

    Regards
    Rolf Tollerud
  188. I rest my case

    You rest your case, declaring an end to the discussion? That's incorrect behaviour, too.

    "That is how it is done" is appropriate for the JarResources file you referred to. You "thought" that maybe 1.5 was doing something similar, evidence you haven't investigated much. If you had investigated, you'd know.

    Please do some basic research and logical thought. It's ridiculous to think that an indexed file uses sequential access to get a specific entry.

    Further, I pulled down the J2SE 6 sources to check the Zip file handling. Guess what? It uses random access to hit the entry directly.

    Sorry.
  189. Hmm

    Joseph,
    "You rest your case, declaring an end to the discussion? That's incorrect behaviour, too"

    I thought you wanted me to shut up.

    "Further, I pulled down the J2SE 6 sources to check the Zip file handling. Guess what? It uses random access to hit the entry directly"

    How can it be random access without a proper index? But if you are saying that J2SE 6 (Mustang) does it in a different way, isn't that the same as saying there were some shortcomings in the previous version?

    Regards
    Rolf Tollerud
  190. Hmm

    Joseph, "You rest your case, declaring an end to the discussion? That's incorrect behaviour, too." I thought you wanted me to shut up. "Further, I pulled down the J2SE 6 sources to check the Zip file handling. Guess what? It uses random access to hit the entry directly." How can it be random access without a proper index? But if you are saying that J2SE 6 (Mustang) does it in a different way, isn't that the same as saying there were some shortcomings in the previous version? Regards, Rolf Tollerud

    I haven't asked you to shut up. I've asked you to use facts, real ones... or nothing at all.

    ZIP files have an index. It reads the index. Then it goes to where the file is... via random access. I'm not suggesting Mustang does it differently, and I'd be surprised if it did; Mustang was simply the J2SE source I had at hand.

    ZIP files are pretty common; it's hard to get away with incorrect access of them. Java does it as efficiently as any other ZIP usage mechanism. If you can show otherwise, please do. So far, you haven't.
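    The "reads the index, then seeks" sequence can be sketched in plain Java: every ZIP ends with an End of Central Directory record (signature 0x06054b50) that points at the central directory, and a reader finds it by scanning backwards from the end of the file. The class below is my own illustration of that first step, not JDK code:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class EocdFinder {
    /** Returns the file offset of the End of Central Directory record, or -1 if not found. */
    static long findEocd(String zipPath) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(zipPath, "r");
        try {
            long len = raf.length();
            // The EOCD record is at least 22 bytes and may be followed by a
            // zip-file comment of up to 64 KB, so scan backwards over that window.
            long stop = Math.max(0, len - 22 - 0xFFFF);
            for (long pos = len - 22; pos >= stop; pos--) {
                raf.seek(pos);
                int b0 = raf.read(), b1 = raf.read(), b2 = raf.read(), b3 = raf.read();
                // The signature 0x06054b50 is stored little-endian: 0x50 0x4b 0x05 0x06
                if (b0 == 0x50 && b1 == 0x4b && b2 == 0x05 && b3 == 0x06) {
                    return pos;
                }
            }
            return -1;
        } finally {
            raf.close();
        }
    }
}
```

    From the EOCD a reader learns where the central directory sits, seeks there once, and can then jump straight to any entry's local header - no sequential scan of the archive body.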
  191. I want out!

    I guess you don't just want me to quit, do you? I have to say "you were right and I was wrong". I am really pressed for time, so why don't you give me the filename and row number? Then maybe I can accommodate you.

    But what was the reason for the problem? Big Jar, Slow Startup - Help !!!!

    Regards
    Rolf Tollerud
  192. I want out!

    Rolf, all I care about is that you stop saying something intentionally incorrect.

    But since you asked: line 828 of j2se/src/share/native/java/util/zip/zip_util.c is where it locates the entry by the hash key matched to the index. This is referred to by ZipEntry.c in the same directory, which is referred to by ZipEntry.java in the runtime library, which is referred to by ZipFile.java, the superclass of JarFile.

    I went to the google ref, and noted that it's referring to something on PocketPC - and all the answers are aimed at a much larger environment. I don't know why, given the data so far, that particular user's jar is slow; perhaps he's loading over the network, perhaps his startup procedure makes specific reference to a lot of classes, perhaps his PocketPC is slow. Perhaps the JVM for his PocketPC is slow.

    Like I say, I don't know.

    However, when *I* run large java apps on my laptop, it doesn't take a long time to start things - two, three seconds, perhaps, including JVM startup? While my data's just as anecdotal as our poor PocketPC friend's data is, I think the existence of controverting data indicates that you can't just blame a given problem on the reading of ZIP files. Let's wait until the poor fellow comes out and says "I found the problem" before saying what the problem MIGHT have been.
  193. I want out!

    And I suppose it's too much to ask that you use as supporting evidence a thread that was, like, newer than MAY 2003.
  194. Below is the code you are referring to (rows 800-911 from zip_util.c). Do you call that random access? This file, together with ZipFile.java, uses every trick in the book: memory-mapped files, NIO, etc.

    I can admit that it is not sequential access, but it's nowhere near as fast as reading a normal indexed random-access file with regular records.

    I also found a useful page that shows that it is not at all that easy to read a zip/jar file. Check,
    http://zziplib.sourceforge.net/zzip-parse.html

    Let's call it a close now. You (and the others) were partly right.

    Regards
    Rolf Tollerud


    /*
     * Returns the zip entry corresponding to the specified name, or
     * NULL if not found.
     */
    jzentry *
    ZIP_GetEntry(jzfile *zip, char *name, jint ulen)
    {
        unsigned int hsh = hash(name);
        jint idx = zip->table[hsh % zip->tablelen];
        jzentry *ze;

        ZIP_Lock(zip);

        /*
         * This while loop is an optimization where a double lookup
         * for name and name+/ is being performed. The name char
         * array has enough room at the end to try again with a
         * slash appended if the first table lookup does not succeed.
         */
        while(1) {

            /* Check the cached entry first */
            ze = zip->cache;
            if (ze && strcmp(ze->name,name) == 0) {
                /* Cache hit! Remove and return the cached entry. */
                zip->cache = 0;
                ZIP_Unlock(zip);
                return ze;
            }
            ze = 0;

            /*
             * Search down the target hash chain for a cell whose
             * 32 bit hash matches the hashed name.
             */
            while (idx != ZIP_ENDCHAIN) {
                jzcell *zc = &zip->entries[idx];

                if (zc->hash == hsh) {
                    /*
                     * OK, we've found a ZIP entry whose 32 bit hashcode
                     * matches the name we're looking for. Try to read
                     * its entry information from the CEN. If the CEN
                     * name matches the name we're looking for, we're
                     * done.
                     * If the names don't match (which should be very rare)
                     * we keep searching.
                     */
                    ze = newEntry(zip, zc, ACCESS_RANDOM);
                    if (ze && strcmp(ze->name, name)==0) {
                        break;
                    }
                    if (ze != 0) {
                        /* We need to release the lock across the free call */
                        ZIP_Unlock(zip);
                        ZIP_FreeEntry(zip, ze);
                        ZIP_Lock(zip);
                    }
                    ze = 0;
                }
                idx = zc->next;
            }

            /* Entry found, return it */
            if (ze != 0) {
                break;
            }

            /* If no real length was passed in, we are done */
            if (ulen == 0) {
                break;
            }

            /* Slash is already there? */
            if (name[ulen-1] == '/') {
                break;
            }

            /* Add slash and try once more */
            name[ulen] = '/';
            name[ulen+1] = '\0';
            hsh = hash_append(hsh, '/');
            idx = zip->table[hsh % zip->tablelen];
            ulen = 0;
        }

        ZIP_Unlock(zip);
        return ze;
    }
  195. Rolf, as I've said in email and here, I've little interest in banning you.

    Nobody said it was "easy" - at least, I haven't. But ZIP files obviously aren't regular records to index. And the use of memory-mapped files itself has implications for how loads are done - especially dependency on the efficiency of a given OS for the algorithms. Furthermore, as this is native code, some mechanisms may differ across VMs.

    All of this goes directly toward what I was saying. You're making a claim; it's not verifiable; it's fairly easy to show how it's incorrect; please stop.
  196. Below is the code you are referring to (rows 800-911 from zip_util.c). Do you call that random access?

    It hurts. It hurts to try to explain basic C code to someone who is determined to prove everyone wrong.

    The code you posted is a typical linked list traversal with an optimization for comparison based on an integer hash code.

    It performs no I/O whatsoever, sequential or random. It is walking the structure that someone referred to as an "index" structure.
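    For readers who don't follow the C, the shape of that traversal can be sketched in Java: a bucket table plus chained cells, touching only in-memory structures. This mirrors the logic of ZIP_GetEntry, not the JDK's actual implementation (the names here are made up for illustration):

```java
import java.util.*;

public class ChainLookup {
    // One cell in a hash chain, analogous to jzcell in zip_util.c:
    // a 32-bit hash plus the index of the next cell in the same bucket.
    static final class Cell {
        final int hash; final String name; final int next;
        Cell(int hash, String name, int next) {
            this.hash = hash; this.name = name; this.next = next;
        }
    }

    static final int END_CHAIN = -1; // like ZIP_ENDCHAIN

    // table[bucket] holds the index of the first cell in that bucket's
    // chain; new cells are pushed onto the front of the chain.
    static void insert(List<Cell> cells, int[] table, String name) {
        int h = name.hashCode();
        int bucket = Math.floorMod(h, table.length);
        cells.add(new Cell(h, name, table[bucket]));
        table[bucket] = cells.size() - 1;
    }

    // The lookup walks one chain, comparing the cheap int hash before
    // the full name -- purely in-memory, no I/O anywhere.
    static String lookup(List<Cell> cells, int[] table, String name) {
        int hsh = name.hashCode();
        int idx = table[Math.floorMod(hsh, table.length)];
        while (idx != END_CHAIN) {
            Cell c = cells.get(idx);
            if (c.hash == hsh && c.name.equals(name)) return c.name;
            idx = c.next;
        }
        return null;
    }

    public static void main(String[] args) {
        int[] table = new int[4];
        Arrays.fill(table, END_CHAIN);
        List<Cell> cells = new ArrayList<>();
        for (String n : new String[] {"a.txt", "b.txt", "c/"}) {
            insert(cells, table, n);
        }
        System.out.println(lookup(cells, table, "b.txt")); // b.txt
        System.out.println(lookup(cells, table, "zzz"));   // null
    }
}
```

    Like the C version, nothing here reads a file; the "index" is built once and then walked in memory.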
    This file, together with ZipFile.java, uses every trick in the book: memory-mapped files, NIO, etc.
    I can admit that it is not sequential access, but it's nowhere near as fast as reading a normal indexed random-access file with regular records.

    There is no such thing as "reading a normal indexed random access file with regular records". You appear to be completely uneducated about how I/O works. Let me give you a brief course:

    Sequential I/O has two conceptual methods: readNextByte() and writeNextByte(byte).

    Random I/O has two conceptual methods: readByteAt(offset) and writeByteAt(offset, byte).

    That's it. Done.
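    The two conceptual models map directly onto the JDK's stream and RandomAccessFile APIs. A minimal, self-contained Java sketch (scratch temp file; not part of the thread's code):

```java
import java.io.*;

public class IoModels {
    // Write bytes 0..9 with the sequential model: each write advances
    // an implicit cursor, like a conceptual writeNextByte(b).
    static File writeDemo() throws IOException {
        File f = File.createTempFile("demo", ".bin");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f)) {
            for (int i = 0; i < 10; i++) out.write(i);
        }
        return f;
    }

    // Sequential read: readNextByte() from the current cursor position.
    static int readFirstSequential(File f) throws IOException {
        try (FileInputStream in = new FileInputStream(f)) {
            return in.read();
        }
    }

    // Random read: the caller names the offset, readByteAt(offset).
    static int readByteAt(File f, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r")) {
            raf.seek(offset); // jump straight to the offset, no scanning
            return raf.read();
        }
    }

    public static void main(String[] args) throws IOException {
        File f = writeDemo();
        System.out.println("sequential first byte = " + readFirstSequential(f)); // 0
        System.out.println("byte at offset 7 = " + readByteAt(f, 7));            // 7
    }
}
```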
    Let's call it a close now. You (and the others) were partly right.

    You took the conversation way off topic, you were completely technically incorrect, and you made a bunch of baseless accusations that you should be apologizing for.

    No skin off our backs, though .. we've been dealing with the same pattern for a couple years now.

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
  197. You took the conversation way off topic, you were completely technically incorrect, and you made a bunch of baseless accusations that you should be apologizing for.
    No skin off our backs, though .. we've been dealing with the same pattern for a couple years now.
    Peace,
    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java

    My concern is that if the pattern follows its usual course, there will appear a post in some thread sometime soon that claims that "Java is slow because it loads a lot of unnecessary library code", and the whole thing will start again, in a similar way to the regular "No serious applications use EJB" posts from the same source.
  198. Below is the code you are referring to (rows 800-911 from zip_util.c). Do you call that random access?
    It hurts. It hurts to try to explain basic C code to someone who is determined to prove everyone wrong.
    The code you posted is a typical linked list traversal with an optimization for comparison based on an integer hash code.
    It performs no I/O whatsoever, sequential or random. It is walking the structure that someone referred to as an "index" structure.
    This file, together with ZipFile.java, uses every trick in the book: memory-mapped files, NIO, etc.
    I can admit that it is not sequential access, but it's nowhere near as fast as reading a normal indexed random-access file with regular records.

    There is no such thing as "reading a normal indexed random access file with regular records". You appear to be completely uneducated about how I/O works.
    Let me give you a brief course:
    Sequential I/O has two conceptual methods: readNextByte() and writeNextByte(byte).
    Random I/O has two conceptual methods: readByteAt(offset) and writeByteAt(offset, byte).
    That's it. Done.
    Let's call it a close now. You (and the others) were partly right.
    You took the conversation way off topic, you were completely technically incorrect, and you made a bunch of baseless accusations that you should be apologizing for.
    No skin off our backs, though .. we've been dealing with the same pattern for a couple years now.
    Peace,
    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java

    You're much too kind to explain things to Rolf. It takes, what, 5 minutes to type "java.io" into Google and then see exactly what you just said in the post. Or one could easily type "System.IO" and see the same thing in .NET.

    peter
  199. Cameron,

    My claim from the beginning was "that the jar files are always read into memory". Since the system uses mmap files, that must be true. Somehow we got onto the subject of how the zip/jar is read "to avoid being read into memory". I don't care what you call the system that reads the zip files. It is a complete mess. To call it random access is a joke.

    Anyhow I will take a six months break. I expect that in that time even the most fanatical zealot will realize that Java is a little down-at-the-heel.

    bye bye for now
    Rolf Tollerud
  200. Cameron, my claim from the beginning was "that the jar files are always read into memory". Since the system uses mmap files, that must be true. Somehow we got onto the subject of how the zip/jar is read "to avoid being read into memory". I don't care what you call the system that reads the zip files. It is a complete mess. To call it random access is a joke. Anyhow I will take a six months break. I expect that in that time even the most fanatical zealot will realize that Java is a little down-at-the-heel.
    bye bye for now
    Rolf Tollerud

    I explained clearly that you need not incur the cost of loading the entire jar into the same VM, therefore your claims are absolutely incorrect. Unzipping a jar file using a separate process is a standard practice, so I'm puzzled that anyone would claim loading a jar file is a huge memory/performance issue. You do it once and it's done. If you need to instantiate a class, it will obviously have to be read into memory; the same is true of every single platform, from C, C++, and Java to C#.

    peter
  201. Cameron, My claim from the beginning was "that the jar files are always read into memory". Since the system uses mmap files, that must be true.

    No, this is false. Just because a file is memory-mapped does not mean that the entire thing is actually read into memory.

    From the freebsd-java mailing list:

    http://lists.freebsd.org/pipermail/freebsd-java/2003-December/001352.html

    "Note that java will mmap any .jar files you use, so
    things like rt.jar will add 30MB to your process size right off the bat. Very little of it will actually get paged in from disk, though, and what little does get paged in will be shared across all java processes."
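    The lazy paging that mailing-list post describes can be seen with the JDK's own FileChannel.map API. A minimal sketch, using a scratch temp file in place of rt.jar:

```java
import java.io.*;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MmapDemo {
    // map() reserves virtual address space for the whole file
    // immediately; the OS only pages bytes in from disk lazily,
    // when they are first touched.
    static MappedByteBuffer mapWholeFile(File f) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(f, "r");
             FileChannel ch = raf.getChannel()) {
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public static void main(String[] args) throws IOException {
        // Scratch file standing in for a large jar such as rt.jar.
        File f = File.createTempFile("fake-jar", ".jar");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.setLength(32 * 1024 * 1024); // 32 MB of address space
        }
        MappedByteBuffer buf = mapWholeFile(f);
        // Mapping added 32 MB to the process's virtual size, but only
        // the page containing offset 0 is faulted in by this read.
        System.out.println("mapped " + buf.capacity()
                + " bytes, first byte = " + buf.get(0));
    }
}
```

    The process's virtual size grows by the full file length, but resident memory only grows as pages are actually touched, which is exactly the distinction the quoted post makes.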
  202. Cameron, My claim from the beginning was "that the jar files are always read into memory".

    Your claim was wrong.
    Since the system uses mmap files, that must be true.

    Since the system uses memory mapped files, it would not be true. Again, your claim is wrong.
    Somehow we got into the subject of how the zip/jar was read "to avoid being read into memory".

    I don't know what you are talking about. We never got into a discussion about how something was read to avoid it being read.
    I don't care what you call the system that reads the zip files. It is a complete mess.

    I don't care what it is called either. I gave you a conceptual lesson .. it is a shame that you did not learn anything from it.

    However, for you to judge something as a complete mess while failing to understand the very concept of random I/O is a very funny joke.

    Ha ha.

    Ha.

    ("a" pronounced as in the "o" in "why bother?")
    To call it random access is a joke.

    To call it random access is correct.

    The fact that you refer to it as a joke is a joke.
    Anyhow I will take a six months break.

    Woohoo!
    I expect that in that time even the most fanatical zealot will realize that Java is a little down-at-the-heel.

    I don't know what that means.

    However, if you are trying to say that Java is showing its age and is in need of a face lift, then you'll find that most people will agree with you.

    However, most of us say it in a constructive manner, because we want to see Java improve. You say it in a derogatory manner, for whatever reason.

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
  203. However, most of us say it in a constructive manner, because we want to see Java improve. You say it in a derogatory manner, for whatever reason.
    Peace,
    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
    Excuse me, Cameron. While I am no supporter of Rolf, do you mean to say you have made no derogatory remarks about Microsoft or whatever is coming out of their stables? Please. Constructive criticism should be universally directed. I have seen you making inflammatory remarks in the past at TSS.com about .NET and ending it with that silly "peace" thing. Please practice what you preach.
  204. However, most of us say it in a constructive manner, because we want to see Java improve. You say it in a derogatory manner, for whatever reason.
    Peace,
    Cameron Purdy
    Tangosol Coherence: Clustered Shared Memory for Java
    Excuse me, Cameron. While I am no supporter of Rolf, do you mean to say you have made no derogatory remarks about Microsoft or whatever is coming out of their stables? Please. Constructive criticism should be universally directed. I have seen you making inflammatory remarks in the past at TSS.com about .NET and ending it with that silly "peace" thing. Please practice what you preach.
    Too much exposure to TSS.com is starting to affect my judgement. In retrospect this comment about Cameron is a little too harsh and gives the impression of putting Cameron and Rolf in the same league -- which can obviously never be the case. My sincere apologies to Cameron.
  205. Let's see the job-market in 6 months

    Anyhow I will take a six months break. I expect that in that time even the most fanatical zealot will realize that Java is a little down-at-the-heel.

    Or in that time, the quality of postings will have improved a lot.
  206. Anyhow I will take a six months break.
    Ok, now that we managed to get Rolf away until next week, can we get back on topic please?

    Sorry, couldn't resist the joke. On a more serious note, let's evaluate during these hours, I mean, months if post quality improves or what. ;)
  207. Anyhow I will take a six months break.
    Ok, now that we managed to get Rolf away until next week, can we get back on topic please?
    Sorry, couldn't resist the joke. On a more serious note, let's evaluate during these hours, I mean, months if post quality improves or what. ;)
    Oh it will improve -- you can bet on it. What use is a topic without some religion thrown into the mix to spice things up? However, with Rolf out of the picture, you wouldn't see stuff like a web services benchmark post turning into how a class loader loads .jar files.
  208. Anyhow I will take a six months break.
    Ok, now that we managed to get Rolf away until next week, can we get back on topic please?
    Sorry, couldn't resist the joke. On a more serious note, let's evaluate during these hours, I mean, months if post quality improves or what. ;)

    I think it would be helpful to work out some general strategy for dealing with what some of us may consider FUD. My preference is for some sort of indexed 'debate archive', so that if highly controversial points of view are posted someone could reply with a standard phrase such as: 'here is a link to a recent in-depth discussion of this matter. Please read this discussion before posting further.' I'm not implying that this should stop debate; just that it might prevent the tedious re-playing of the same arguments (or, at the very least, re-playing of the same arguments would then become obvious). This archive could cover everything from the absurd 'No high-performance commercial sites use EJB' arguments that I'm sure many of us are far too familiar with, to more reasonable points often raised regarding Java performance, or the way ORM works, and so on. Perhaps what is needed is a TSS FAQ (Frequently Argued Questions) facility.
  209. But that doesn't help any, does it? So how is it loaded? I know it is in java.util.zip somewhere but I have other things to do. Considering that nobody in the link I gave you was able to help, I should not be bashed too hard for not knowing IMO, not being a java programmer.
    Regards
    Rolf Tollerud

    Is it a question or a taunt?

    Regarding the Zip file, it's all a question of the algorithm that you use. I don't think this is the place for a discussion of compression; you should maybe go to related newsgroups.

    In general you achieve better compression ratios if you consider the entire set of files you're compressing.
    I have not had a look at the WinZip implementation, but basically you have two choices:
    1/ Tar the set of files into one big file and compress it.
    2/ Compress each file individually.

    If you go for the first option, accessing the nth file requires reading the entire stream to the end of the nth file. If that file is at the end of the stream, you load the whole thing.

    If you were able to retrieve a single file without reading the entire zip file, then you should be able to destroy the contents of the zip file before and after it (putting in a whole bunch of zeros, for example).
    Of course, a simple test with WinZip will fail because it does a checksum (remember those "file corrupt" errors after downloading a zip file?)

    However, if the theory of certain people is correct, then you should be able to extract that class file with the methods that are provided by the JDK.

    My hunch is that you simply can't extract a class without at least reading the stream that precedes the file you need.

    Regards,

    Cédric

    ps: The discussion of Java/.NET versus the rest of the world is a matter of productivity and efficiency. The performance is a no-brainer; who cares? Getting an application out there that works is more important than anything else.

    pps: Regarding the "memory is cheap" trend: totally off-topic, but nonetheless I will respond. I work for a company that has over 90 000 employees.
    Each employee was given a desktop (Pentium II with 64 MB of RAM running on NT) a few years ago.
    In 2005, we are all upgrading to 1 GB of RAM to run Java applications, based on the statement that memory is cheap. I invite all those who believe this point to come over and upgrade everybody's machine for peanuts, and you will be my champion.
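    Cédric's hunch above is testable with the JDK's own java.util.zip API: ZipFile reads the central directory at the end of the archive and then seeks directly to the requested entry, so a single file can be pulled out without streaming through the entries before it. A minimal sketch (scratch archive; the entry names are made up for illustration):

```java
import java.io.*;
import java.util.zip.*;

public class SingleEntryDemo {
    // Build a small archive with two entries.
    static File buildDemoZip() throws IOException {
        File f = File.createTempFile("demo", ".zip");
        f.deleteOnExit();
        try (ZipOutputStream zos =
                 new ZipOutputStream(new FileOutputStream(f))) {
            zos.putNextEntry(new ZipEntry("a/First.class"));
            zos.write("first".getBytes("UTF-8"));
            zos.closeEntry();
            zos.putNextEntry(new ZipEntry("b/Second.class"));
            zos.write("second".getBytes("UTF-8"));
            zos.closeEntry();
        }
        return f;
    }

    // ZipFile locates the entry via the central directory at the end
    // of the archive and seeks straight to its local header; it does
    // not decompress the entries that precede it.
    static String extractOne(File zip, String entryName) throws IOException {
        try (ZipFile zf = new ZipFile(zip)) {
            ZipEntry e = zf.getEntry(entryName);
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(zf.getInputStream(e), "UTF-8"))) {
                return r.readLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File f = buildDemoZip();
        System.out.println("extracted: " + extractOne(f, "b/Second.class"));
    }
}
```

    This works because ZIP compresses each entry individually (Cédric's option 2) and keeps a per-entry offset in the central directory; a gzipped tar (option 1) really would require reading up to the nth file.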
  210. This fight is getting to me (though it's fun sometimes).
    I have only worked with Java in my years in school, and for about two years I have been working on .NET. Not because I wanted to, but because that's what I was offered. In the end it's what brings you the money that matters.
    I have never regretted working in .NET; it's easy and you can bring real advantages to your clients (perhaps it would have gone the same if I had worked in Java).
    As to these benchmarks, I can tell from experience that they will never determine how your application will work in real-life conditions. I suspect .NET WS are better, but in the end it's the tools that get the job done that matter.

    With respect to C/C++ code vs managed code: well, I strongly believe that managed code will eventually win. Just take a look at Singularity. Though it has many disadvantages and will not become commercially viable any time soon, it's a bold project. (They resolved the microkernel struggle of switching contexts just by taking advantage of managed code.)

    As for the statement that no serious application is developed in .NET, I must strongly say that is not true. Though I could give you plenty of examples (Visa and MasterCard in Italy, the Brazilian equivalent of the Social Security, etc.), I can tell you that in the last four months I have worked on developing some very compelling applications. One does stock management for all branches of BAT in Romania and uses .NET Remoting. The other, for Nestle Romania, knows how to talk to AS400 and does all kinds of stuff with OLAP and other business intelligence. That is not even four months for 2 big projects. So please gimme a break.

    Respect
    bogdanutz
  211. On the one hand, I have to laugh, but on the other hand, it kind of makes me sick to my stomach...

    M$ have turned almost every Java forum here into a circus.

    The Java hating .NET pushers are out in force.

    So what are we to do?

    Should we be spending our time in the .NET forums telling the .NET heads how wonderful J2EE is and how our lives suddenly got better when we made the switch from .NET to J2EE? LOL.

    That would be stooping down to their level, which is something I won't do.

    Maybe we need to find another discussion forum? TheServerSide has degenerated into an advertisement for .NET.

    Good job M$. You've only deepened my hatred of all things M$.
  212. Arguing with "Microsoft is always best" Rolf is a waste of time. He is like a politician who talks in circles.

    He states that .NET is faster than Java in every case-study benchmark he has ever seen. This despite the fact that I have already pointed out to him on multiple occasions that in the latest PetStore benchmark posted on gotdotnet.com (an MS-owned website), J2EE (using EJB) has a higher average throughput than .NET.

    As soon as he is proven wrong he just jumps to the next topic and starts all over again. He posts nonsense based on incorrect assumptions over and over and over again. By the time he is finished, 50%+ of many threads consists entirely of Rolf posting nonsense and others proving him wrong.
  213. XML parser comparison

    FYI, here is an article comparing the performance of different XML models and parsers:

    http://www-106.ibm.com/developerworks/xml/library/x-injava/index.html

    HTH
    Kit