Benchmark Analysis: Guice vs Spring from JInspired



  1. William Louth wrote "Benchmark Analysis: Guice vs Spring" because, in his words, "the benchmark referred to in the article as a modified version of Crazy Bob’s Semi Useless Benchmark is interesting not in terms of the results but in the analysis of the results." He applied and documented his standard analysis methods to dig deeper into those results. Here is some of the analysis extracted from the article (which has many graphs showing technique and actual runtime, where Guice was generally faster than Spring):
    What immediately struck me about the data was the change in ordering of the singleton and non-singleton tests across Spring and Guice. With Guice the singleton tests were always slower than the non-singleton tests, which is the complete opposite of Spring and of what you would expect. I then noticed that this difference was much more pronounced when executed concurrently: Spring (CS) took 13.696 seconds compared with Guice’s (CS) 19.544 seconds - approximately a 6 second difference.
    It's a fascinating look at cause analysis within both Spring and Guice - and also shows a consideration when using JUnit. :)

    Threaded Messages (13)

  2. Who cares?

    This is bootstrap/initialization timing, isn't it? So WTF cares? What matters is which is easier, more scalable, more maintainable. -- Bill Burke
  3. Re: Who cares?

    This is bootstrap/initialization timing, isn't it? So WTF cares? What matters is which is easier, more scalable, more maintainable.

    Bill Burke
    I should actually read the article before posting ;-)
  4. Perspective

    Spring did beat Guice in the "singleton" test which basically tests how fast the framework can return an existing object when you go through the API. Basically, it's no more complicated than: final Singleton s = ...; ... return s; Guice comes out a little slower because it creates a couple objects. Sure, creating a couple small objects looks expensive compared to return s, but it's not actually expensive in the scheme of things (not like Spring is when it comes to creating new objects ;)). Also, Guice only creates those objects once per object graph, not per singleton. In this case, the entire "object graph" consists of one singleton, so the overhead of calling into the injector dwarfs "creating" the object graph.
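Bob's point above - that returning a cached singleton is nearly free while Guice's lookup path allocates a couple of short-lived helper objects per call - can be sketched in plain Java. This is a hypothetical illustration, not Guice's or Spring's actual code; the `LookupContext` helper class is invented purely to stand in for the "couple objects" created per call.

```java
// Hypothetical sketch (not Guice's actual code) contrasting the two hot paths
// described above: handing back a cached singleton directly, versus allocating
// a couple of short-lived helper objects on every lookup before returning the
// very same instance.
public class SingletonLookup {
    static final class Service {}

    private static final Service CACHED = new Service();

    // "return s"-style hot path: no allocation at all.
    static Service direct() {
        return CACHED;
    }

    // Injector-style hot path: a small per-call object is created and becomes
    // garbage immediately, even though the singleton itself is reused.
    static final class LookupContext { final long startNanos = System.nanoTime(); }

    static Service viaInjector() {
        LookupContext ctx = new LookupContext(); // dead after this call
        return CACHED;                           // same instance either way
    }

    public static void main(String[] args) {
        // Both paths return the identical singleton; they differ only in
        // per-call allocation, which is what the benchmark ends up measuring.
        System.out.println(direct() == viaInjector()); // prints "true"
    }
}
```

Under concurrent load those throwaway allocations are what shift the cost from the lookup itself onto GC, which is the thread of the analysis below.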
  5. Re: Perspective

    Hi Bob, I am re-posting my replies from my blog here. You are right that this is not representative of reality, but that could be said for the whole benchmark itself. For a standard enterprise application, the interaction with local and remote resource managers will dwarf any overhead Spring or Guice adds.

    The point of my article was to show the issues with creating and interpreting such benchmarks. It was also to point out the kind of investigative process that we should expect from anyone posting performance numbers comparing one product with another - resource metering!!!

    By the way, when tuning our AspectJ Probes extension we had a similar construct, Closure, that was created repeatedly for each @Before callback. Because the Closure was already associated with an object specific to a thread (no contention), Context, we were able to use a free list (pool) of reusable closures within the context, which reduced our times in a benchmark significantly (we are talking about nano-benchmarking). Note we were not tuning for a particular case, the benchmark, because our Probes did fire that frequently when instrumentation was applied to a large code base.

    This is not a Guice vs Spring war. If it were, then I am on the side of the hand-coded factory that outperformed both products. I thought it was extremely relevant in light of the recent set of articles appearing on TSS related to performance management and performance testing. Kind regards, William
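The free-list trick William describes - reusing Closure objects via a pool held in a thread-specific Context - can be sketched like this. The `Closure` and `Context` names come from his description; the bodies are invented for illustration and are not JXInsight's actual implementation.

```java
import java.util.ArrayDeque;

// Hypothetical sketch of the free-list (pool) technique described above:
// because each Context is already bound to exactly one thread, acquiring and
// releasing closures needs no synchronization at all.
public class ClosurePool {
    static final class Closure {
        Runnable body;
        void reset() { body = null; } // clear state before reuse
    }

    // One Context per thread, so the free list is contention-free.
    static final class Context {
        private final ArrayDeque<Closure> freeList = new ArrayDeque<>();

        Closure acquire() {
            Closure c = freeList.poll();
            return (c != null) ? c : new Closure(); // allocate only on a miss
        }

        void release(Closure c) {
            c.reset();
            freeList.push(c);
        }
    }

    private static final ThreadLocal<Context> CONTEXT =
            ThreadLocal.withInitial(Context::new);

    public static void main(String[] args) {
        Context ctx = CONTEXT.get();
        Closure first = ctx.acquire();
        ctx.release(first);
        Closure second = ctx.acquire();
        // The pool hands back the recycled object instead of allocating anew,
        // which is what cuts allocation (and hence GC) out of the hot path.
        System.out.println(first == second); // prints "true"
    }
}
```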
  6. Re: Perspective

    I would also like to add that the analysis demonstrated the inter-play between object allocation, GC, and thread monitor contention (blocking). The additional object creation for each singleton instance call on Guice resulted in a performance degradation not only because of increased GC but also because of increased thread monitor contention - the time in the synchronized code block for the lock owner was lengthened by the object creation and, to some degree (though not proven in the article), by increased GC counts and times.

    So at this stage you must be asking: where is the synchronized block? Well, I did not get time to switch back into my investigative mode, as I had another early access build of JXInsight to deliver this week, but I do know that it is not within the Guice code itself, after having had a very quick look when identifying the allocation sites. I suspect it is in its generated proxies, but I will confirm this with an update to the report at the weekend. regards, William
  7. Synchronized

    Hi, William. Why do you think there is synchronization here? There is no synchronization in this code path. There is exactly one thread local lookup and one volatile field access.
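A lock-free lookup of the shape Bob describes - one thread-local lookup plus one volatile field access on the hot path - can be sketched as follows. This is a hypothetical stand-in, not Guice's actual singleton scope; the `Provider` interface and the re-entrancy guard are assumptions for the sketch.

```java
// Hypothetical sketch of a lazy singleton scope whose hot path is exactly one
// ThreadLocal lookup (a re-entrancy/circularity guard) plus one volatile read;
// a synchronized block is reached only during first construction.
public class LazySingletonScope<T> {
    public interface Provider<T> { T get(); }

    private final Provider<T> unscoped;
    private volatile T instance;                      // one volatile read per lookup
    private final ThreadLocal<Boolean> constructing = // one thread-local lookup per lookup
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    public LazySingletonScope(Provider<T> unscoped) {
        this.unscoped = unscoped;
    }

    public T get() {
        if (constructing.get()) {
            throw new IllegalStateException("circular dependency during construction");
        }
        T t = instance;              // volatile read; non-null after first creation
        if (t == null) {
            synchronized (this) {    // slow path: entered at most once per scope
                t = instance;
                if (t == null) {
                    constructing.set(Boolean.TRUE);
                    try {
                        t = unscoped.get();
                        instance = t;
                    } finally {
                        constructing.set(Boolean.FALSE);
                    }
                }
            }
        }
        return t;
    }

    public static void main(String[] args) {
        LazySingletonScope<Object> scope = new LazySingletonScope<>(Object::new);
        // Repeated lookups return the same instance without re-entering the lock.
        System.out.println(scope.get() == scope.get()); // prints "true"
    }
}
```

Once `instance` is published, no monitor is ever acquired on this path again - which is why the contention William observed had to come from somewhere else.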
  8. Re: Synchronized

    Why do you think there is synchronization here?
    Hi Bob, Because there was thread monitor contention (blocking) reported. I will re-run the tests now that you have questioned this, but if I recall correctly this only appeared when I created a temporary probe around the factory.get() call, which I did not report in the article because I did not complete this part of my analysis to the level I normally would. I will be back. William
  9. Re: Synchronized

    Hi Bob, We are both right. There is no synchronization in the code path for this example, but there is thread monitor contention indirectly via the object allocation execution. The thread monitor contention occurs on the monitor associated with the static "lock" field within the java.lang.ref.Reference class, which is acquired by the current garbage collector thread and (under the hood) attempted to be entered by other threads busy creating Guice objects.

    I am not completely sure what the conditions for such contention are, or whether it is present in all runtime implementations, but the times I have seen it have been when running benchmarks that are multi-threaded and allocation intensive, which is the case for the Guice concurrent singleton test run.

    I have added a new Insight extension into the next EA build (#17) of JXInsight which allows the remote inspection of a number of lock ** sample ** sets with associated blocked and waiting threads. As soon as it is fully tested I will post some screenshots showing the inspection with this benchmark. By the way, the contention is present in the tests of both technologies at allocation sites. regards, William
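The workload shape William describes - several threads allocating short-lived objects as fast as they can - can be reproduced with a small sketch. Whether a given JVM and collector actually show contention around java.lang.ref.Reference's internal lock under this load depends on the runtime and is not something the sketch asserts; it only recreates the allocation pressure for inspection under a profiler.

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical reproduction of a multi-threaded, allocation-intensive run like
// the Guice concurrent singleton test: each worker thread churns through
// short-lived allocations. Run this under a profiler/GC log to look for the
// GC-side monitor behavior discussed above; the code itself takes no locks.
public class AllocationPressure {
    static final LongAdder allocations = new LongAdder();

    public static long run(int threads, int perThread) throws InterruptedException {
        allocations.reset();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    byte[] garbage = new byte[64]; // short-lived allocation
                    if (garbage.length == 64) allocations.increment();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return allocations.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads x 100_000 allocations each
        System.out.println(run(4, 100_000)); // prints "400000"
    }
}
```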
  10. Re: Synchronized

    Hi Bob, The EA #17 build of JXInsight 5.5 was published this weekend with a new JVM(Insight) extension for runtime inspection of Java thread monitor locks. I also published a new blog entry, "Java Thread Locks," showing the inspection in action with "benchmark" test runs of Guice and Spring.

    By and large, the initial "benchmark" results were more a test of the efficiency of GC than of the technology, though there are still some useful points to take away, especially for those engineers who (amazingly) believe such reports without first questioning the analysis and methodology. Again, there ** was ** monitor contention (though not directly) within the execution frames of the Guice code. The JXInsight Probes are reporting correctly once one understands that the probes report thread metering during the measurement interval. This is very important, as it means one can profile at whatever level of detail is required without having to instrument (and filter) every method frame. The following article might help in understanding how this all works in relation to grouping and inherent metering: Tutorial: Probes API.

    Kind regards, William Louth JXInsight Product Architect CTO, JINSPIRED "Performance Monitoring and Problem Diagnostics for Java EE, SOA, and Grid Computing"
  11. Re: Perspective

    This is not a Guice vs Spring war.
    I think the subject of the thread suggests this though. I didn't read the article looking for reasons to use Guice instead of Spring or vice versa, but the subject implies that you should find something along those lines. Instead, I'm pretty sure this is meant to show how JXInsight Probes can be used (which is very interesting), and thus the subject of the thread should reflect that.
  12. Re: Perspective

    Hi Rob, I really do not want to turn this into a JXInsight promotion thread. I would prefer that we take a step back and look at the way we perform an analysis of the execution and resource consumption behavior of an application or benchmark. I ran these tests with an early access build of JXInsight 5.5, so I could be setting myself up for a fall if any of the results are wrong, but nearly all the metering data aligns with other metrics and the code itself. I focused on the data that I was sure was correct, but really I hoped that people reading the post looked at the thought and analysis process trying to shine through my words and pictures. The title is correct because it is an analysis of a benchmark that someone published stating how much faster Spring 2.5 was compared with previous tests. regards, William
  13. Re: Perspective

    Good point! What I found interesting is the tool (JXInsight Probes), rather than the Spring vs Guice 'war'. All too often camps are set up, with followers on each side, both claiming to be 'the best'. Regards, Paul Casal Sr. Developer The Enterprise Open Source Billing System
  14. Pretty fascinating analysis - and not just the results (which are somewhat expected) but how much one can digest from a tool like JXInsight. Nikita Ivanov. GridGain - Grid Computing Made Simple