Discussions

News: JBoss Releases JSF Testing Tool - JSFUnit 1.0 Beta 1

  1. JBoss is pleased to announce a community beta release of JBoss JSFUnit. JSFUnit is an open source community project dedicated to testing JSF applications. JSFUnit tests are based on JUnit and Cactus. It provides three different testing tools:
    • In-container Testing Framework: This is for testing both client-side and server-side JSF artifacts. Test scope ranges from fine-grained tests of individual classes to full-fledged integration testing of a running JSF application.
    • Framework for JSF Static Analysis Testing: This allows you to test your JSF configuration so that you find config problems early.
    • JSFTimer for Performance Testing of the JSF Lifecycle: This shows you how long each phase of the JSF lifecycle took for a given request. It also gives you total JSF lifecycle time. The JSFTimer allows you to write threshold tests that fail whenever a use case doesn't meet your performance standards.
    JSFUnit differs from most testing tools in that it allows testing of a running JSF application. You submit a real HTTP request and then examine the real JSF artifacts such as the FacesContext and the component tree. Also, you can retrieve and examine managed beans using EL expressions. This makes it perfect for unit tests and integration tests alike. You can even look at the HTML that was returned to the client. As mentioned above, we also throw in tools for static analysis and performance analysis. So give it a try and let us know what you think. So long, and thanks for all the fish, Stan Silvert http://www.jsfunit.org
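    A minimal sketch of what such an in-container test looks like, based on the getting-started examples on jsfunit.org. The /index.faces view, the component IDs and the #{foo.text} expression are placeholders, and the exact package and method names may differ slightly in Beta 1:

      // JSFUnit test: runs inside the container via a Cactus ServletTestCase.
      // Package names are assumed from the JSFUnit docs and may differ by release.
      import org.apache.cactus.ServletTestCase;
      import org.jboss.jsfunit.facade.JSFClientSession;
      import org.jboss.jsfunit.facade.JSFServerSession;

      public class JSFUnitExampleTest extends ServletTestCase
      {
         public void testInitialPage() throws Exception
         {
            // Submit a real HTTP request for the view under test
            JSFClientSession client = new JSFClientSession("/index.faces");
            JSFServerSession server = new JSFServerSession(client);

            // Examine real server-side JSF artifacts
            assertEquals("/index.jsp", server.getCurrentViewID());
            assertNotNull(server.getFacesContext().getViewRoot());

            // Fill in a form field by JSF component ID and submit the form
            client.setParameter("input_foo_text", "Stan");
            client.submit("submit_button");

            // Retrieve and examine a managed bean using an EL expression
            assertEquals("Stan", server.getManagedBeanValue("#{foo.text}"));
         }
      }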

    Threaded Messages (26)

  2. I have never been convinced that unit testing is applicable to performance engineering (or testing), especially when the focus is on clock timers, which offer very little information outside a proper performance test environment - and this is not one. Anytime I see the mention of unit test assertions with regard to performance I am reminded of this famous quote:
    The reserve of modern assertions is sometimes pushed to extremes, in which the fear of being contradicted leads the writer to strip himself of almost all sense and meaning.
    Which translates to engineers pushing the clock time values used in asserts up to higher values than the application's performance objectives, to handle the outliers caused by various issues with the environment, including non-deterministic GC cycles, resource initialization costs, and inadequate hardware capacity. It is pretty simple to see and understand that most unit testing environments are not adequate performance testing environments. By the way, there are some counters that are meaningful and independent of the environment, but these are rarely used because the unit test extension is narrowly focused on one layer, one thread, one process and one poor man's metric. If you are doing real performance testing of JSF applications then you might be interested in the following blogs, which show how to relate performance profiles with JSF phases - all contextual. JSF Contextual Call Tracing and Profiling http://blog.jinspired.com/?p=8 Resource Metering of JSF http://blog.jinspired.com/?p=149 William
  3. Your product is nice too. Stan Silvert http://www.jsfunit.org
  4. Your product is nice too.
    No need to state the obvious. Instead could you maybe elaborate on your experience with performance unit testing in the wild with JSF? How really meaningful is this (limited) data? Do you think developers are actually able to define reasonable performance criteria at such early stages taking into account the test environment? How does one go about that? Guesses? Failures and Hikes? Can developers really isolate the execution from factors I hinted at to ensure that failures are valid? If you can convince me that this is really useful and not open to abuse then I will add this into my own product and allow a developer to make assertions on the execution performance of any tagged and distributed activity. We have had thoughts on this before but in the final analysis it seemed more ego-driven "look what I can do" than actually useful. By the way I have always been conservative in the level of automated UI (screen & state) testing that can be done. The problem, which is not just related to JSFUnit, is the time spent fixing and maintaining all those tests that are based on a typically very volatile aspect of any product. William
  5. How really meaningful is this (limited) data? Do you think developers are actually able to define reasonable performance criteria at such early stages taking into account the test environment? How does one go about that? Guesses? Failures and Hikes? Can developers really isolate the execution from factors I hinted at to ensure that failures are valid?
    Your argument seems to be that the tool could be abused. I'll agree to that. Understand that JSFUnit is an in-container testing tool that can be used for acceptance testing in just the proper performance environment that you suggested. The JSFTimer could be used by someone other than the original developer. For instance, a QA team can set up JSFUnit acceptance tests that measure performance of certain use cases. Then they either do asserts when it exceeds a threshold or simply create reports using JSFTimer data.
    By the way I have always been conservative in the level of automated UI (screen & state) testing that can be done. The problem, which is not just related to JSFUnit, is the time spent fixing and maintaining all those tests that are based on a typically very volatile aspect of any product.
    I'm glad you brought that up. Like I said in the post, JSFUnit is different. It's not like those black box tools that key off of the HTML output (though you can still do that in JSFUnit if you really want to). In JSFUnit tests, you use the JSF component ID to reference components on both the client side and server side. So your test is shielded from changes to the more volatile parts of your UI such as style and layout. JSF itself is what makes this kind of testing possible. Because JSF is a component framework, you test at the component level instead of the HTML level. IMO, this is the proper level of abstraction to test JSF applications. Stan Silvert http://www.jsfunit.org
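    To make the threshold idea from this exchange concrete, here is a hedged sketch of a JSFTimer acceptance test. It assumes the JSFTimer accessors described on jsfunit.org (getTimer(), getTotalTime(), getPhaseTime(PhaseId)); the /checkout.faces view, the confirm_button ID and the 500ms/250ms budgets are invented examples, not recommendations:

      // In-container acceptance test asserting performance budgets for one use case.
      // JSFUnit package names are assumed and the thresholds are illustrative only.
      import javax.faces.event.PhaseId;
      import org.apache.cactus.ServletTestCase;
      import org.jboss.jsfunit.facade.JSFClientSession;
      import org.jboss.jsfunit.framework.JSFTimer;

      public class CheckoutPerformanceTest extends ServletTestCase
      {
         public void testCheckoutStaysWithinBudget() throws Exception
         {
            // Drive the use case with a real request against the running application
            JSFClientSession client = new JSFClientSession("/checkout.faces");
            client.submit("confirm_button");

            // Ask the JSFTimer how long the last JSF lifecycle took on the server
            JSFTimer timer = JSFTimer.getTimer();
            assertTrue("Whole lifecycle exceeded its 500ms budget",
                       timer.getTotalTime() < 500);
            assertTrue("Render Response exceeded its 250ms budget",
                       timer.getPhaseTime(PhaseId.RENDER_RESPONSE) < 250);
         }
      }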
  6. For instance, a QA team can set up JSFUnit acceptance tests that measure performance of certain use cases. Then they either do asserts when it exceeds a threshold or simply create reports using JSFTimer data.
    Why would they not apply the assertions like most performance testers and operations staff do today via automated probe scripts (http requests)? Most of these guys will only request a breakdown when an SLA has been breached. I would be amazed to find an SLA or performance objective defined at such a fine granularity. I would also be amazed to find a QA person understanding the various JSF request lifecycle phases and being able to define a threshold for each use case within a large web application. Note this is different from tracking behavioral changes across product releases on a relatively fixed test bed, where such data might be useful in performing delta analysis when combined with measurements from other sources internal and external to the JVM. Here I am talking about performance engineering, which is not the same as performance testing. William
  7. So it MIGHT be abused or someone MIGHT not use it correctly. Or you might want to use a tool that's more complicated and/or more expensive. That's a lot of speculation. JSFTimer is a small feature of a brand new open source project. The great thing about the open source community is that we don't have to guess what developers want and then come up with wild scenarios about how it may or may not be used. If people like the JSFTimer, it will be enhanced. If they don't find it useful, we'll concentrate on the other features. Stan Silvert http://www.jsfunit.org
  8. Or you might want to use a tool that's more complicated and/or more expensive.
    I never said that. I really do not know what you are talking about, but maybe that is the same at your end. It is really sad that every time someone questions the usefulness of an OSS tool|solution or implicitly compares it to another commercial tool|solution we get the usual pathetic response:
    1. Tool|Solution X is complicated.
    2. Tool|Solution X is expensive.
    Which normally translates to:
    1. Tool|Solution X is feature rich (because it has been used to tackle a wide set of problems).
    2. Tool|Solution X has upfront costs (and pays for itself very quickly).
    Sorry, but I prefer to have a tool that provides me with a comprehensive performance model, with useful and interactive visualizations, over looking at the textual output of some JUnit tests. The benefit of a stored performance model is that one can easily apply and reapply different thresholds after the fact on any version of the performance model. In fact that is what we do today with our automated performance inspections.
    The great thing about the open source community is that we don't have to guess what developers want and then come up with wild scenarios about how it may or may not be used.
    Is this not exactly the point? Wild scenarios = JSF Phase Thresholds. By the way, when one of the component state assertions fails, what is the general process for determining what went wrong? Especially if it might be an intermittent issue or related to previous state changes. Personally I would prefer the ability to inspect the request/view state offline for those requests that failed. Framing State in JUnit & TestNG Tests http://blog.jinspired.com/?p=178 I could always inspect the state at whatever phase, but in practice this is simply too much overhead for all use cases. http://blog.jinspired.com/?p=9 If I were to attempt a unit testing tool related to such a volatile aspect of any application (yes, even the components) I would focus more on delta analysis than on the creation of unit tests. Unit tests could be used to frame the window of change, but in the end I would like a report stating that a particular aspect (view state, component state, ....) of the application changed in this release. Then I would look to see whether this was planned or expected. Not assertions but more like warnings or heads-ups. I find it easier to work with changes than to construct assertions on a model that is relatively unknown at the stage of test creation. William
  10. If I were to attempt a unit testing tool related to such a volatile aspect of any application (yes, even the components) I would focus more on delta analysis than on the creation of unit tests.
    That's fine William. Do your delta analysis. I like unit tests and integration tests. JSF is a component framework. JSFUnit lets you test at the JSF API (component) level. But really, it goes deeper than that. JSFUnit lets you test in-container, managing both the client and server side of each request - all in the same test. You literally have access to everything from the client-side HTML/HTTP all the way down to the database. It breaks the barriers between black box and white box testing. So we leave it up to the developer to decide what kind of analysis is appropriate. JSFUnit gives you the freedom to do that. Stan Silvert http://www.jsfunit.org
  11. "Delta Analysis" Object ?= Object

    That's fine William. Do your delta analysis. I like unit tests and integration tests.
    Stan, this might come as a big surprise to you but effectively that is what a unit test is performing. A typical unit test is comparing a limited value set from one model (static across tests and largely implicit) with a limited value set from another model created and changed during the test method execution. That is why we have all those assertEquals(?,?) overloaded methods in the JUnit and TestNG APIs. A problem with this approach is that you can easily miss side effects not visible from the value set extracted from the test model. To be honest I have not thought much about this area or done any research, but if I were tasked with designing such a tool I would look to make the model more explicit within my future-generation testing framework. I would then only need one assert method, assert(Model, Model). The models would be subsets (and abstractions) of the underlying information/state model. Wow, this looks very familiar - ITIL CMDB. William [I have discounted procedural-style testing]
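    Purely to make that idea concrete, here is a hypothetical sketch of the single assert(Model, Model) style; the names Model-as-a-map, snapshot() and assertModelsEqual() are invented for illustration and do not exist in JUnit, TestNG or JSFUnit:

      // Hypothetical illustration: instead of many assertEquals(x, y) calls on
      // hand-picked values, capture an explicit snapshot of the relevant state
      // before and after the action under test and compare the models as a whole.
      import java.util.Map;
      import java.util.TreeMap;
      import junit.framework.Assert;

      public class ModelDeltaSketch
      {
         // A "model" here is just an ordered map of property names to values,
         // i.e. an explicit subset (abstraction) of the underlying state.
         public static Map<String, Object> snapshot(Object... nameValuePairs)
         {
            Map<String, Object> model = new TreeMap<String, Object>();
            for (int i = 0; i < nameValuePairs.length; i += 2)
            {
               model.put((String) nameValuePairs[i], nameValuePairs[i + 1]);
            }
            return model;
         }

         // The single assert method: any difference between the two models,
         // including keys present in one but missing from the other, fails the test.
         public static void assertModelsEqual(Map<String, Object> expected,
                                              Map<String, Object> actual)
         {
            Assert.assertEquals(expected, actual);
         }
      }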
  12. Re: "Delta Analysis" Object ?= Object


    That's fine William. Do your delta analysis. I like unit tests and integration tests.

    Stan, this might come as a big surprise to you but effectively that is what a unit test is performing.

    A typical unit test is comparing a limited value set from one model (static across tests and largely implicit) with a limited value set from another model created and changed during the test method execution.
    I did misunderstand what you meant by delta analysis. So if I am understanding you now, your beef is with unit testing in general. No wonder I misunderstood. That's a very unusual position to take. Suffice it to say, thousands of developers find unit tests and unit test tools to be useful and practical. I know I do. But if you want to take a completely different approach I'd be interested in seeing the result. Stan Silvert http://www.jsfunit.org
  13. Disregarding the cheap punches

    So if I am understanding you now, your beef is with unit testing in general
    I read the original post to question the ability to do proper performance testing in the context of unit-tests. A valid and pretty interesting question in my opinion. I am not a huge fan of JSF myself, but I am extremely excited about all the tool support it is getting. I'm looking forward to trying this tool out soon. Keep up the good work!
  14. Re: Disregarding the cheap punches

    I read the original post to question the ability to do proper performance testing in the context of unit-tests. A valid and pretty interesting question in my opinion.
    I agree. It actually is an interesting question. And I don't claim to know the final answer on that one. William does make this claim but I'm not convinced he is right. JSFTimer was a simple feature to add to JSFUnit. If others find it useful, then that's great. If not, no harm done. I do know that at the very least, those in the JUnitPerf community do find this kind of test to be worthwhile. JSFTimer just brings JUnitPerf to the JSF lifecycle. Stan Silvert http://www.jsfunit.org
  15. Re: Disregarding the cheap punches

    William does make this claim but I'm not convinced he is right.
    The last person that stated a similar conviction was also a JBoss developer and he was proved wrong. Tell me why I am wrong because up to now you are the one making claims without any evidence or accredited experience. Whereas I have tried to explain why I believe the approach is wrong - and this is my area of expertise.
    I do know that at the very least, those in the JUnitPerf community do find this kind of test to be worthwhile.
    Does this community include JBoss? Could that be the reason why to date JBoss has not published a single industry standard benchmark result for their application server? You [JBoss] have all been busy writing and rewriting performance unit tests trying to determine what threshold values to use. Count? William
  16. Re: Disregarding the cheap punches

    Wow William, you sure do like to talk a lot.
  17. Re: Disregarding the cheap punches

    Wow William, you sure do like to talk a lot.
    It's like watching a game of tag, isn't it? You're it...
  18. Re: Disregarding the cheap punches

    Could that be the reason why to date JBoss has not published a single industry standard benchmark result for their application server?
    Benchmarks are for idiots.
  19. Re: Idiots

    Benchmarks are for idiots.
    Benchmarks are a very useful way of comparing performance for particular tasks. Caring about the performance of your app server is an idiotic concern? Says much about your attitude. Well done.
  20. Re: "Delta Analysis" Object ?= Object

    I did misunderstand what you meant by delta analysis. So if I am understanding you now, your beef is with unit testing in general. No wonder I misunderstood. That's a very unusual position to take. Suffice it to say, thousands of developers find unit tests and unit test tools to be useful and practical. I know I do.
    Stan, again you are completely missing the point, but if it makes you feel more comfortable and confident to have "thousands of developers" on your side then fair enough. I do have a "beef" with unit testing when applied incorrectly, such as in your case where you wrongly mix it with integration testing and performance testing.
    Unit tests are useful because they narrow the information model (population) sufficiently for us to construct assertions (sample) that sufficiently cover all relevant state side effects. One can narrow the size of the sample whilst still maintaining a high level of confidence because the developer of the class and test has a very good understanding of the aspects (volatile state) of the model related to the test's execution. Now it is very easy to see that the typical sample found within a unit @Test method does not provide the same level of confidence when the model is suddenly enlarged to cover many other systems and components. One could have a pass, but the test could have introduced an infection in a related component that might not be visible until after the test execution. Of course the solution is to expand the sample model, but this is not scalable because (1) the developer is unlikely to understand the complete model and to be able to identify the important aspects, and (2) the effort in creating and maintaining such tests becomes enormous with such primitive unit testing techniques (assertEquals(x,y)).
    In my opinion a much more efficient approach is to continuously baseline a sufficiently large state model and then use much broader delta analysis techniques to determine possible (state) infections. This is not a perfect solution, as the structure (not the value) of state models is subject to change whilst still maintaining the interface contracts. But I think this is an area worth investigating rather than wasting time on a technique that is only ever likely to offer a minimal sanity check at a high effort. William
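    As an aside, a hypothetical sketch of the "baseline and warn" style of delta analysis described above; no existing tool is implied and every name here is invented for illustration:

      // Hypothetical: compare a previously stored baseline of the state model with
      // the current run and report each difference as a warning (a heads-up to
      // review), rather than failing the build with a hard assertion.
      import java.util.LinkedHashSet;
      import java.util.Map;
      import java.util.Set;

      public class BaselineDeltaReport
      {
         public static void report(Map<String, Object> baseline, Map<String, Object> current)
         {
            Set<String> allKeys = new LinkedHashSet<String>(baseline.keySet());
            allKeys.addAll(current.keySet());
            for (String key : allKeys)
            {
               if (!baseline.containsKey(key))
               {
                  System.out.println("WARN new state element: " + key + " = " + current.get(key));
               }
               else if (!current.containsKey(key))
               {
                  System.out.println("WARN state element removed: " + key);
               }
               else if (!sameValue(baseline.get(key), current.get(key)))
               {
                  System.out.println("WARN state element changed: " + key + " ("
                        + baseline.get(key) + " -> " + current.get(key) + ")");
               }
            }
         }

         private static boolean sameValue(Object a, Object b)
         {
            return (a == null) ? (b == null) : a.equals(b);
         }
      }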
  21. I wanted to add that I use the term "state" very loosely. State is not just confined to the values of fields within the persistent model. It can also encompass transient and in-flight request state, especially between component boundaries. Yes, a tall order for a tool|solution, but I have never been one to dream|plan|execute things in half measures, at least when it is technical. William
  22. [William] Yes, a tall order for a tool|solution, but I have never been one to dream|plan|execute things in half measures, at least when it is technical.
    In fact there is a solution on the market that apparently addresses the problem in a similar way to what I described: AgitarOne http://www.agitar.com/solutions/products/software_agitation.html which uses observations and assertions. I knew of the name but never really investigated the underlying technique used until now, when someone referred me to the site. JXInsight includes AOP-based Observe & Frame extensions for capturing runtime state diagnostics when a problem is detected in production, but from this product we can see that the same technique can also be used for integration testing. Personally I like the idea of inspecting and exploring the actual state model and then applying assertions, which this tool seems to support. William
  23. Re: "Delta Analysis" Object ?= Object

    I do have a "beef" with unit testing when applied incorrectly, such as in your case where you wrongly mix it with integration testing and performance testing.
    Now you're just spewing nonsense. Who said anything about mixing unit tests, integration tests, and performance tests? JSFUnit is simply a tool that allows you to do all of those things. How well or poorly you use it is up to you. Stan Silvert http://www.jsfunit.org
  24. Re: "Delta Analysis" Object ?= Object

    Maybe you should read what you write?
    Test scope ranges from fine-grained tests of individual classes to full-fledged integration testing of a running JSF application.
    For instance, a QA team can set up JSFUnit acceptance tests that measure performance of certain use cases.
    I did not mean to imply mixing in the context of a test execution, though you have yet to actually come back on my original question with any sort of reply. The "mixing" reference was in relation to the usage of a unit testing framework|tool|approach for integration|acceptance|performance testing. William
  25. Re: "Delta Analysis" Object ?= Object

    The "mixing" reference was in relation to the usage of a unit testing framework|tool|approach for integration|acceptance|performance testing.
    I can use a hammer to drive a nail or pull one out. I don't see why a tool can't have more than one use. Stan Silvert http://www.jsfunit.org
  26. William, mate, I actually think you have a really nice product but you're making an ass of yourself in this thread. Why all the negativity? If Stan's tool isn't useful to you, don't use it...
  27. Hi. If I were to look at the brighter side of this thread then I would agree with William that there is more to testing than many of us really know. It seems that this is a serious issue we face every day. Generally, developers are only aware of the basic benefits of unit testing and not of what should be tested and how it should be done. Thanks, Mohan