Industry news: ILOG JRules and Drools Benchmarks

  1. ILOG JRules and Drools Benchmarks (3 messages)

    Daniel Selman's latest post on his ILOG blog compares the performance of JRules 6.6.1 with Drools 4.0.1 when running two of the "academic" benchmarks (Manners and Waltz). You can read Daniel's blog and run the benchmarks for yourself by visiting: http://blogs.ilog.com/brms/
  2. I agree the academic benchmarks are not good and no real conclusions can be drawn: they were designed to test pure Rete engines only, and what they test can easily be negated with the simplest of optimisations. We need to gather a number of real-world use cases and (simpler) benchmarks to get a clearer idea. Luckily this sort of information is starting to come through now. I'm working with a number of users who are stressing Drools in a variety of ways and encouraging them to make sample code available. For instance, one customer has found huge performance gains from an out-of-the-box Drools installation; I'm hoping they will make an example available soon. http://blog.athico.com/2007/08/drools-vs-jrules-performance-and-future.html

    Mark
    http://blog.athico.com
    The Drools Blog
  3. More information on the benchmark

    Here is a thread on ILOG's forum about my findings: http://forums.ilog.com/index.php?topic=53.0

    peter lin
  4. Re: More information on the benchmark

    By the way, the ILOG manual is now online, so you can check out the docs they have for the various rule configurations of the optimised mode. It's great that ILOG are being more open; it shows some of the positive influence Drools is having on the industry in general.

    optimize method
    Optimizes the rules. After the optimization is finished, the ruleset is locked in such a way that an exception is raised when a modification method of the ruleset is called. The optimization consists of determining whether a dynamic or a static agenda is needed for the ruleset, and whether an iterated or a Rete rule is chosen.

    optimise method
    The optimization consists of determining whether a dynamic or a static agenda is needed for the ruleset, whether an iterated or a RetePlus rule is chosen, and guessing hashers.

    This mode gives its best results when the ruleset is highly incremental, that is, when a large number of modifications (insertions, updates, retractions) are applied to the working memory while the rules are executed.
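    To make the "highly incremental" case concrete, here is a minimal, self-contained sketch (not the JRules or Drools engine, all names are illustrative) of a match-act loop in which a rule's action retracts the fact it matched and inserts a derived one, so the working memory is modified continuously while rules execute:

```java
import java.util.ArrayList;
import java.util.List;

// Toy match-act loop: each firing retracts the matched fact and inserts a
// derived fact, so the working memory changes while rules are running.
public class IncrementalSketch {
    public static void main(String[] args) {
        List<Integer> wm = new ArrayList<>(List.of(7, 12)); // working memory
        List<Integer> fired = new ArrayList<>();            // facts the rule fired on
        boolean matched = true;
        while (matched) {
            matched = false;
            for (int i = 0; i < wm.size(); i++) {
                int fact = wm.get(i);
                if (fact > 10) {          // condition part of the single rule
                    wm.remove(i);         // retraction
                    wm.add(fact - 10);    // insertion of a derived fact
                    fired.add(fact);
                    matched = true;
                    break;                // re-match after every modification
                }
            }
        }
        System.out.println(wm + " " + fired);
    }
}
```

    An engine serving this workload has to keep its match state consistent after every insertion and retraction, which is why the optimized mode decides between agenda strategies based on how incremental the ruleset is.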

    static agenda
    This mode delays rule instance creation until just before the rule is executed. Unlike the default RetePlus, there is no list of rule instances. This strategy may reduce memory use and improve performance when rule execution leads to the removal of some rule instances from the agenda.
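    The point of delaying instance creation can be shown with a small sketch (again purely illustrative, not ILOG's implementation): a fact matches two rules, and firing the first retracts it. An eager agenda has already built an instance for the second rule that is now stale and must be discarded; a lazy agenda never builds it.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntPredicate;

// Eager vs. lazy rule-instance creation for a fact that matches two rules,
// where firing the first rule retracts the fact.
public class AgendaSketch {
    record Rule(String name, IntPredicate guard) {}

    public static void main(String[] args) {
        List<Rule> rules = List.of(
            new Rule("A", v -> v > 10),
            new Rule("B", v -> v > 5));

        // Eager agenda: create an instance for every current match up front.
        // Firing A retracts the fact, so B's instance becomes stale and must
        // be removed from the agenda without ever firing.
        List<Integer> wm = new ArrayList<>(List.of(12));
        int eagerInstances = 0;
        for (Rule r : rules) if (!wm.isEmpty() && r.guard().test(wm.get(0))) eagerInstances++;

        // Lazy agenda: create an instance only just before firing, so the
        // stale instance for B is never created at all.
        List<Integer> wm2 = new ArrayList<>(List.of(12));
        int lazyInstances = 0;
        while (!wm2.isEmpty()) {
            for (Rule r : rules) {
                if (r.guard().test(wm2.get(0))) {
                    lazyInstances++;   // instance created at firing time
                    wm2.remove(0);     // the rule's action retracts the fact
                    break;
                }
            }
        }
        System.out.println(eagerInstances + " " + lazyInstances);
    }
}
```

    The eager path creates two instances and throws one away; the lazy path creates only the one that actually fires, which is exactly the saving described above.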

    The JIT technology in this instance translates the condition part of each rule to Java bytecode. The bytecode is then used to evaluate the condition tests.
    When this property is set to true, the rule engine uses dynamic rule compilation in its internal algorithm to evaluate rules. In terms of performance, the JIT feature enables an execution speed similar to that of compiled rules.
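    The idea behind compiling conditions can be sketched in plain Java (this is a conceptual illustration, not the JRules JIT): an interpreter re-dispatches on the condition's structure every time it evaluates, while "compilation" specializes the condition once into directly executable code, analogous to emitting bytecode for the condition part of a rule.

```java
import java.util.List;
import java.util.function.Predicate;

// Interpreted vs. "compiled" evaluation of a rule condition (field > threshold).
public class JitSketch {
    record Fact(String name, int value) {}

    // Interpreter: generic dispatch on the field name at every evaluation.
    static boolean interpret(Fact f, String field, int threshold) {
        int v = switch (field) {
            case "value" -> f.value();
            default -> throw new IllegalArgumentException(field);
        };
        return v > threshold;
    }

    // "Compilation": resolve the field once and return a direct Predicate,
    // so evaluation carries no per-test dispatch overhead.
    static Predicate<Fact> compile(String field, int threshold) {
        if (!"value".equals(field)) throw new IllegalArgumentException(field);
        return f -> f.value() > threshold;
    }

    public static void main(String[] args) {
        List<Fact> facts = List.of(new Fact("a", 5), new Fact("b", 15));
        Predicate<Fact> compiled = compile("value", 10);
        for (Fact f : facts) {
            // Both paths agree on every fact; only their cost per test differs.
            System.out.println(f.name() + " " + interpret(f, "value", 10)
                                       + " " + compiled.test(f));
        }
    }
}
```

    Real bytecode generation goes further than a lambda, but the trade is the same: pay a one-time translation cost to make each subsequent condition test run at close to compiled-code speed.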