Jesse Kuhnert: new OGNL coming soon?


  1. Jesse Kuhnert: new OGNL coming soon? (10 messages)

    Jesse Kuhnert has posted that a new OGNL release is on the horizon, with speed increases. OGNL is an expression evaluation language, used in Tapestry, some versions of Struts, and other libraries.
    Much like Tapestry-Prop, the new set of OGNL enhancements relies on Javassist to do incremental bytecode compilation/translation of your OGNL expressions into their pure Java equivalents. There are still plenty of kinks being knocked out here and there, but overall the changes are finally starting to form into something that might actually be usable. ... If you are wondering if/when this will be released, I really don't know. All of the changes made so far have been local to my machine only - but if the patches are accepted, then I'd optimistically expect a new Maven 2 snapshot version released sometime this week. The new changes have also been integrated and tested with Tapestry 4.1.2 already; we're just waiting for the right infrastructure to show up to commit the changes to...
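    For readers unfamiliar with the approach: the speedup Jesse describes comes from replacing per-call reflective lookup with generated code that calls the getter directly. The following is a minimal stdlib sketch of the two paths, not OGNL's actual Javassist pipeline; the class and property names are hypothetical.

```java
// Sketch of why compiled accessors beat reflective interpretation.
// Hypothetical POJO; OGNL's real pipeline uses Javassist to emit the
// "direct" form below at runtime from a parsed expression tree.
import java.lang.reflect.Method;

public class AccessorSketch {
    public static class User {
        private final String name;
        public User(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // "Interpreted" path: resolve the getter by name on every call.
    static Object reflectiveGet(Object target, String property) throws Exception {
        String getter = "get" + Character.toUpperCase(property.charAt(0))
                + property.substring(1);
        Method m = target.getClass().getMethod(getter);
        return m.invoke(target);
    }

    // "Compiled" path: what the generated bytecode boils down to - a direct call.
    static Object directGet(User target) {
        return target.getName();
    }

    public static void main(String[] args) throws Exception {
        User u = new User("ognl");
        System.out.println(reflectiveGet(u, "name")); // ognl
        System.out.println(directGet(u));             // ognl
    }
}
```

    The direct call avoids the method lookup and boxing overhead on every evaluation, which is where the bulk of the win comes from.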

    Threaded Messages (10)

  2. OpenSymphony will be taking over OGNL

    Both Drew and Patrick are pretty inactive right now, so Rainer is in the process of migrating the OGNL source code from java.net to OpenSymphony's Subversion repository. We had some bug fixes that needed to be addressed. Once that happens, and if we get a patch from Jesse, I hope we can do a new release with these changes. Our efforts to integrate MVEL into XWork have stalled because we rely so heavily on OGNL-specific features.
  3. Actually, when I requested the move from dev.java.net, Rainer told me Patrick was already on it. Since Patrick is indeed pretty inactive, I have no idea when the move is going to be completed.
  4. Would like to see it in Struts 2 and WebWork. - Sudhir S Nimavat
  5. I'm sure that if the patch is accepted it will eventually be used in Struts 2. (The WebWork people have also been kind enough to help me in my efforts, which probably isn't a random coincidence or act of kindness. ;) )

    Would like to see it in Struts 2 and WebWork.

    Sudhir S Nimavat
  6. Numbers are fishy.

    I fail to see how some of the numbers add up. In the first test, the interpreted mode is faster than the compiled mode and AS fast as a natively compiled Java math expression - which wouldn't even involve a calculation, since it would be reduced to a literal at compile time. Yet these numbers seem to suggest that parsing the text and reducing the tokens interpretively is AS fast as a no-work frame in Java bytecode. That's just, ummm... a little fishy, unless you've unlocked the panacea of "parsing without any overhead". Or maybe you have a front-end cache which isn't performing any reduction at all, in which case the number is meaningless.

    Can we actually see the code used to produce these tests?
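    Chris's point about compile-time reduction can be demonstrated directly. In the hypothetical baseline below, javac folds the constant expression into a literal, so timing it would time an effectively empty frame; the class and method names are illustrative only.

```java
// Illustrates the "compile-time reduced literal" objection: javac
// constant-folds literal arithmetic, so "natively compiled Java math"
// written this way does no work at runtime.
public class ConstantFolding {
    static int constantMath() {
        // Compiled as "return 4" - the multiplication happens at
        // compile time, so this method measures nothing.
        return 2 * 2;
    }

    static int nonFoldedMath(int x) {
        // With a runtime operand, the multiplication survives into bytecode.
        return x * 2;
    }

    public static void main(String[] args) {
        System.out.println(constantMath());   // 4
        System.out.println(nonFoldedMath(2)); // 4
    }
}
```

    A fair "native Java" baseline has to feed the expression runtime operands, otherwise the comparison is against a no-op.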
  7. Re: Numbers are fishy.

    Hehe... All in good time, Chris... all in good time. All your base are belong to OGNL, let's just face it. =p
    I fail to see how some of the numbers add up. In the first test, the interpreted mode is faster than the compiled mode and AS fast as a natively compiled Java math expression - which wouldn't even involve a calculation, since it would be reduced to a literal at compile time. Yet these numbers seem to suggest that parsing the text and reducing the tokens interpretively is AS fast as a no-work frame in Java bytecode. That's just, ummm... a little fishy, unless you've unlocked the panacea of "parsing without any overhead". Or maybe you have a front-end cache which isn't performing any reduction at all, in which case the number is meaningless.

    Can we actually see the code used to produce these tests?
  8. Re: Numbers are fishy.

    I should at least clear up the confusion about what these tests reflect.

    The "interpreted" mode test doesn't show how long it takes OGNL to parse expressions - that would be redundant. The number is the result of repeatedly calling Ognl.get() (or set, in the case of the map setter) on a pre-parsed String expression.

    That's the chain: grammar file -> Java source generation -> expression parse -> a simple set of Java objects representing the logical structure of the expression - which can then have any objects thrown at them -> get/set calls made using the expression.

    The compiled results here represent actual VM compilations using bytecode enhancement from Javassist.

    I ~guess~ you could try to measure the performance of expression parsing, but I'm not sure what point it would serve, as most people care more about the get/set operations. (At least that's what I thought.)
    I fail to see how some of the numbers add up. In the first test, the interpreted mode is faster than the compiled mode and AS fast as a natively compiled Java math expression - which wouldn't even involve a calculation, since it would be reduced to a literal at compile time. Yet these numbers seem to suggest that parsing the text and reducing the tokens interpretively is AS fast as a no-work frame in Java bytecode. That's just, ummm... a little fishy, unless you've unlocked the panacea of "parsing without any overhead". Or maybe you have a front-end cache which isn't performing any reduction at all, in which case the number is meaningless.

    Can we actually see the code used to produce these tests?
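    The benchmark shape Jesse describes - parse once, then measure only the repeated get calls - can be sketched with a toy dotted-path evaluator. This is not OGNL's grammar or API; all class and method names here are hypothetical.

```java
// Minimal sketch of "parse once, evaluate many": the expression text
// is split into a path up front, and only the repeated get() walk is
// what the interpreted-mode number would measure. Toy code, not OGNL.
import java.lang.reflect.Method;

public class ParseOnceSketch {
    public static class Address { public String getCity() { return "Oslo"; } }
    public static class User { public Address getAddress() { return new Address(); } }

    // "Parse": done once, outside the timed loop.
    static String[] parse(String expression) {
        return expression.split("\\.");
    }

    // "Get": resolve each segment reflectively against the current object.
    static Object get(Object root, String[] path) throws Exception {
        Object current = root;
        for (String prop : path) {
            String getter = "get" + Character.toUpperCase(prop.charAt(0))
                    + prop.substring(1);
            Method m = current.getClass().getMethod(getter);
            current = m.invoke(current);
        }
        return current;
    }

    public static void main(String[] args) throws Exception {
        String[] path = parse("address.city"); // parsed once
        Object result = null;
        for (int i = 0; i < 1000; i++) {       // the timed region in a real benchmark
            result = get(new User(), path);
        }
        System.out.println(result); // Oslo
    }
}
```

    The compiled variant would replace the reflective walk inside get() with generated bytecode equivalent to `root.getAddress().getCity()`.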
  9. Re: Numbers are fishy.

    Okay. MVEL's interpreted-mode figures measure how long it takes from a cold start of the parser to the fully resolved result of the expression - not how long it takes to re-execute a pre-built AST.

    And yes, many people do care how fast the parser can do this, and there are plenty of applications where pre-compiled expressions are not doable, perhaps because they are dynamically assembled.

    MVEL also has a templating framework which relies heavily on having a consistently fast parser/interpreter.

    That being said, OGNL does not have a monopoly on bytecode generation for accessors. MVEL 1.2 beta, which will be available shortly, has an optimizing JIT compiler, and it will have full coverage of all supported syntax, including projections, regular expressions, etc.
    I should at least clear up the confusion about what these tests reflect.

    The "interpreted" mode test doesn't show how long it takes OGNL to parse expressions - that would be redundant. The number is the result of repeatedly calling Ognl.get() (or set, in the case of the map setter) on a pre-parsed String expression.

    That's the chain: grammar file -> Java source generation -> expression parse -> a simple set of Java objects representing the logical structure of the expression - which can then have any objects thrown at them -> get/set calls made using the expression.

    The compiled results here represent actual VM compilations using bytecode enhancement from Javassist.

    I ~guess~ you could try to measure the performance of expression parsing, but I'm not sure what point it would serve, as most people care more about the get/set operations. (At least that's what I thought.)

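    The distinction Chris draws - cold start (parse plus evaluate on every call) versus re-executing a pre-built form - can be sketched with a toy evaluator. This is not MVEL's actual API; the names below are hypothetical.

```java
// Contrasts the two numbers under discussion: cold-start evaluation
// (tokenize + evaluate on every call) versus reusing a pre-built
// parse result. Toy "a + b + c" sum evaluator, not MVEL.
public class ColdStartSketch {
    // Cold start: parse and evaluate the text on every invocation.
    static int coldEval(String expr) {
        int sum = 0;
        for (String tok : expr.split("\\+")) sum += Integer.parseInt(tok.trim());
        return sum;
    }

    // "Compile": parse once into a reusable structure (here, the terms).
    static int[] compile(String expr) {
        String[] toks = expr.split("\\+");
        int[] terms = new int[toks.length];
        for (int i = 0; i < toks.length; i++) terms[i] = Integer.parseInt(toks[i].trim());
        return terms;
    }

    // "Execute": only the pre-built structure is touched - no text handling.
    static int execute(int[] terms) {
        int sum = 0;
        for (int t : terms) sum += t;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(coldEval("1 + 2 + 3"));   // 6 - parses every time
        int[] prebuilt = compile("1 + 2 + 3");       // parsed once
        System.out.println(execute(prebuilt));       // 6 - cheap to repeat
    }
}
```

    Dynamically assembled expressions force the coldEval path, which is why a fast parser matters for that use case.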
  10. Re: Numbers are fishy.

    Interesting. I have to ask though, just as a sanity check: what exactly is the use case where cold-start time / parsing of string content matters more than what happens "in production"? In development mode it would be "nice", but hardly something that takes precedence over real long-running production applications.

    The biggest reason I wouldn't even look at MVEL before setting out to make sure this problem is solved for Tapestry users is that your documentation states you don't support <= 1.4 JREs. While that may be nice for new libraries / frameworks / etc., a lot of us are stuck finding solutions in backwards-compatible worlds. You of course don't have anything to be backwards compatible with yet. ;)

    When you say accessors, do you mean that MVEL supports get AND set operations, or only get? (Can't tell if you were using JavaBeans jargon or a more literal term.)

    OGNL may not have a monopoly on JIT compiling, but it does on currently being the most full-featured / mature / fast / integrated expression language for Java. ;) I don't want to dissuade you from MVEL work though. I look forward to the day when I don't have to worry about mature libraries or JREs <= 1.4. Until then I do what I must...

    Okay. MVEL's interpreted-mode figures measure how long it takes from a cold start of the parser to the fully resolved result of the expression. Not how long it takes to re-execute a pre-built AST.

    And yes, many people do care how fast the parser can do this, and there are plenty of applications where pre-compiled expressions are not doable, perhaps because they are dynamically assembled.

    MVEL also has a templating framework which relies heavily on having a consistently fast parser/interpreter.

    That being said, OGNL does not have a monopoly on bytecode generation for accessors. MVEL 1.2 beta, which will be available shortly, has an optimizing JIT compiler, and it will have full coverage of all supported syntax, including projections, regular expressions, etc.
  11. jdk1.4

    We have now stopped JFDI development in favour of MVEL; Chris is incorporating most of our designs for a high-performance, pluggable, reflection-based scripting language with future plans for JIT. For us it's important that the reflection-based language is as fast as it can be before you go JIT. And to be frank, ALL of the existing systems suck at performance, which is why we did JFDI. http://markproctor.blogspot.com/2006/11/jfdi-new-business-action-scripting.html JBoss Rules is JDK 1.4 and above, so we have worked with Chris to enable an MVEL JDK 1.4 backport - so no problems there.

    Mark Proctor
    JBoss Rules Project Lead
    http://markproctor.blogspot.com