JHawk5 Java Metrics Tool Released


  1. JHawk5 Java Metrics Tool Released (3 messages)

    For over 10 years now JHawk has been the leading Java metrics tool for both commercial and academic use. We have led the market through ceaseless innovation: amongst our achievements, we were the first to integrate with a visual IDE (Visual Age for Java) and the first to provide facilities to export to CSV, HTML and XML formats.

    But we don't rest on our laurels. JHawk 5, our latest release, takes another leap forward. Amongst the new features in our Professional version are:

    • A complete interface overhaul that makes JHawk even easier to use
    • Standalone, Eclipse plugin and command line versions
    • A configurable Dashboard feature that provides an instantaneous overview of your code, both within the application and in the exported HTML files
    • The ability to define your own metrics. These are written in Java and can use data from JHawk itself as well as external sources, e.g. bug databases or source control repositories. If you can access it in Java you can use it as a metric (see the sketch after this list)
    • A JHawk Metric Interchange format that allows you to record a snapshot of a code base and keep it for future use, either in JHawk itself or in our new Data Viewer product
    • JHawk Data Viewer, a standalone product that allows graphical and textual comparison of metrics over time. The Data Viewer reads data exported in the JHawk Metric Interchange format and also provides a facility to export data to the Google Visualization API. By combining the two, the Data Viewer can distil massive amounts of data down to a single Google Visualization. For example, our online sample (http://www.virtualmachinery.com/visualizations/EclipseAnalysis.html), also available as part of the trial version download, shows the evolution of a number of key metrics over 7 different releases of the Eclipse source code, from version 1.0 to version 3.3.
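
    As an illustration of the custom-metric idea above, here is a minimal sketch of the sort of metric you might write, combining a size figure with data from an external bug database. All names here (BugsPerKlocMetric, the bug counts) are invented for illustration and are not JHawk's real extension API; that API is described in the documentation that ships with the product.

      import java.util.HashMap;
      import java.util.Map;

      // Illustrative sketch only - the class and method names are invented,
      // not JHawk's real extension API. It shows the idea: a metric written
      // in Java that combines code data (lines of code) with an external
      // source (open bug counts per class).
      public class BugsPerKlocMetric {

          // Stand-in for an external source, e.g. counts pulled from a bug database.
          private final Map<String, Integer> openBugsByClass = new HashMap<>();

          public BugsPerKlocMetric() {
              openBugsByClass.put("com.example.Parser", 7); // hypothetical data
          }

          // Combine the external bug count with a size measure for the class.
          public double value(String className, int linesOfCode) {
              int bugs = openBugsByClass.getOrDefault(className, 0);
              return linesOfCode == 0 ? 0.0 : bugs * 1000.0 / linesOfCode;
          }

          public static void main(String[] args) {
              BugsPerKlocMetric metric = new BugsPerKlocMetric();
              System.out.println(metric.value("com.example.Parser", 1400)); // 5.0
          }
      }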

    [Screenshot from the Eclipse plugin]

    Reflecting our wide range of users, we offer a number of different license options (Personal, Professional, Site and Corporate). Our pricing and upgrade policy is transparent and cost-effective: even our Corporate license costs less than one day's fees for a high-level consultant. We also offer an Academic License Program for research facilities. You can find out more about JHawk5 here. For full details of our licensing and pricing see here. You can download our trial version here. The trial version includes much of the documentation that accompanies the full release.

  2. I work in metrics, so I'm happy to see another good-looking tool available. I've got a few problems with JHawk.

    1. What do you actually measure? I took a quick look at your demo and many metric values are off. I parsed an application with a fan-out value of 5-10 (to application classes), but its CBO is 1. Obviously, you are filtering classes. This is not documented. The problem with not documenting this is that it is impossible to migrate from another metric extraction tool to yours without readjusting rules.

    2. In your visualisation, it should be possible to apply a log to metric values, as it is sometimes useful to minimise the visual impact of extreme values (which are normal in systems).

    3. What is the cyclomatic complexity of a class? The number of independent paths through a class is pretty hard to compute. I assume it is the maximum value over the class's methods (without considering polymorphism).

    4. Some information on your web site is incorrect:

    -> What you call LCOM* was defined by Brian Henderson-*Sellers*

    -> "The only consideration in calculating this metric is whether only the classes written for the project should be considered or whether library classes should be included." Not really. How do you account for polymorphism? Technically, CBO specifies usage counts.

    -> LCOM* values of >1 being suggestive of bad design: that's obvious, but it is an outlier case (it occurs when methods generally do not use any attributes). The metric was, however, designed so that a value of 1 would already be very bad.

    -> CK did not say that WMC should count each method as unity. If people want to ignore the internal complexity of methods that's something, but it is not an inherent problem with the metric (which I did not see in the tool).

    -> You need to spell-check your web site (e.g.: dependancies, Boochs)




  3. Hi Stephane

    Thanks for your comments - I'll try to answer them one by one.



    <<1. What do you actually measure? I took a quick look at your demo and many metric values are off. I parsed an application with a fan-out value of 5-10 (to application classes), but its CBO is 1. Obviously, you are filtering classes. This is not documented. The problem with not documenting this is that it is impossible to migrate from another metric extraction tool to yours without readjusting rules.>>

    The demo actually only analyses a small subset of the classes that you select. For the purposes of CBO, only those classes in the set being analysed will be considered. Therefore you will get values that do not reflect your initial selection, only the subset that the demo has randomly selected.
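
    A toy illustration of that filtering effect (our own sketch, not JHawk code): if CBO only counts coupled classes that are inside the analysed set, shrinking the set shrinks the value.

      import java.util.Set;

      // Toy CBO counter showing why restricting the analysed set shrinks the
      // value; real extraction works on parsed source, not sets of names.
      public class CboExample {

          // Count only those referenced classes that are inside the analysed set.
          static long cbo(Set<String> referencedClasses, Set<String> analysedSet) {
              return referencedClasses.stream().filter(analysedSet::contains).count();
          }

          public static void main(String[] args) {
              Set<String> refs = Set.of("A", "B", "C", "D", "E"); // fan-out of 5
              System.out.println(cbo(refs, refs));        // 5 - full selection analysed
              System.out.println(cbo(refs, Set.of("C"))); // 1 - demo's random subset
          }
      }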

    <<2. In your visualisation, it should be possible to apply a log to metric values, as it is sometimes useful to minimise the visual impact of extreme values (which are normal in systems).>>

    I agree: this would be a useful option to provide. I will certainly add it to the list for the next version.
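
    For instance (our sketch, not JHawk code), a log transform such as log1p compresses the extreme values before charting while keeping zeros finite:

      import java.util.Arrays;

      // Log-scaling metric values so one extreme value does not dominate a chart.
      public class LogScale {
          public static void main(String[] args) {
              double[] cc = {1, 2, 3, 5, 200};  // the 200 dwarfs everything on a linear axis
              double[] scaled = Arrays.stream(cc).map(Math::log1p).toArray();
              System.out.println(Arrays.toString(scaled)); // roughly 0.7 to 5.3 - comparable now
          }
      }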


    <<3. What is the cyclomatic complexity of a class? The number of independent paths through a class is pretty hard to compute. I assume it is the maximum value over the class's methods (without considering polymorphism).>>

    At class level we provide the Total Cyclomatic Complexity (TCC), which is the sum of the cyclomatic complexities of the methods in the class; the Maximum Cyclomatic Complexity (the CC of the method with the highest CC); and the Average Cyclomatic Complexity (the average cyclomatic complexity of the methods in the class). These values are not designed as metrics per se but are supplied as pointers to possible 'hot spots' in the code. They are supplied at System and Package level as well.
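
    In other words (a sketch with made-up numbers, not JHawk's implementation), the three class-level figures are simple aggregates over the per-method CC values:

      import java.util.stream.IntStream;

      // The three class-level aggregates over per-method cyclomatic complexities.
      public class CcAggregates {
          public static void main(String[] args) {
              int[] methodCc = {1, 1, 2, 4, 12}; // made-up per-method CC values
              System.out.println("TCC    = " + IntStream.of(methodCc).sum());                   // 20
              System.out.println("Max CC = " + IntStream.of(methodCc).max().getAsInt());        // 12
              System.out.println("Avg CC = " + IntStream.of(methodCc).average().getAsDouble()); // 4.0
          }
      }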


    <<4. Some information on your web site is incorrect:

    -> What you call LCOM* was defined by Brian Henderson-*Sellers*>>

    We meant that the value was calculated using the method described in Henderson-Sellers' book (Object-Oriented Metrics: Measures of Complexity).
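
    For reference, the formula from that book is usually stated as follows (our rendering, where m is the number of methods in the class, a the number of attributes, and mu(A_j) the number of methods that access attribute A_j):

      % LCOM* per Henderson-Sellers: 0 is perfectly cohesive; when no method
      % accesses any attribute the value is m/(m-1), i.e. just above 1 -
      % the outlier case discussed above.
      \[
        \mathrm{LCOM}^{*} = \frac{\frac{1}{a}\sum_{j=1}^{a}\mu(A_j) - m}{1 - m}
      \]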

    <<-> "The only consideration in calculating this metric is whether only the classes written for the project should be considered or whether library classes should be included." Not really. How do you account for polymorphism? Technically, CBO specifies usage counts.>>

    Yes, but the argument here could be that the programmer only has control over the code that they have written. I'm not saying that the argument is correct, only that it can be made.

    <<-> LCOM* values of >1 being suggestive of bad design: that's obvious, but it is an outlier case (it occurs when methods generally do not use any attributes). The metric was, however, designed so that a value of 1 would already be very bad.>>

    We don't have any argument with this.

    <<-> CK did not say that WMC should count each method as unity. If people want to ignore the internal complexity of methods that's something, but it is not an inherent problem with the metric (which I did not see in the tool).>>

    WMC is provided as the Total Cyclomatic Complexity for the class (see above).


    <<-> You need to spell-check your web site (e.g.: dependancies, Boochs)>>
    Will do!

    Thank you for your critical analysis - it's always good to know that people read and use the stuff we write.

    JHawk ships with documentation describing the calculation of each of the metrics. If you don't like the way a particular metric is calculated, we provide a method to add your own metrics to JHawk, using data collected by the tool as well as external data. Extensive documentation and examples are provided.

    Regards
    The JHawk Team
    Virtual Machinery



  4. Thanks for the response. 


    <<The demo actually only analyses a small subset of the classes that you select. For the purposes of CBO, only those classes in the set being analysed will be considered. Therefore you will get values that do not reflect your initial selection, only the subset that the demo has randomly selected.>>

    Understood. FYI, your demo splash screen says that the files are parsed, but not displayed. In any case, it is hard to know what you measure if you don't document it. For example, I don't know if you consider declared references to objects, declared references to any type (including interfaces), or potential real references using type-analysis algorithms.
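
    To make the ambiguity concrete (my example): in the class below, a counter based on declared types sees a coupling to List, one based on instantiations sees ArrayList, and a type-analysis approach might count both.

      import java.util.ArrayList;
      import java.util.List;

      // Does this class couple to List (declared type), ArrayList (instantiated
      // type), or both? Different counting rules give different CBO/fan-out values.
      public class CouplingAmbiguity {
          private final List<String> names = new ArrayList<>();

          public void add(String name) {
              names.add(name);
          }
      }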


    <<At class level we provide the Total Cyclomatic Complexity (TCC), which is the sum of the cyclomatic complexities of the methods in the class; the Maximum Cyclomatic Complexity (the CC of the method with the highest CC); and the Average Cyclomatic Complexity (the average cyclomatic complexity of the methods in the class). These values are not designed as metrics per se but are supplied as pointers to possible 'hot spots' in the code. They are supplied at System and Package level as well.>>

    For your information, averages (if you use means) do not make sense as complexity typically follows a power-law distribution. In any case, I'm actually not sure why you would use averages to track "hot spots" as maximum values would be better indicators.
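
    To illustrate with made-up numbers: in a skewed set of method complexities the mean looks tame while the maximum exposes the hot spot.

      import java.util.stream.IntStream;

      // Made-up skewed data showing why a mean can hide a hot spot.
      public class MeanVsMax {
          public static void main(String[] args) {
              int[] cc = {1, 1, 1, 1, 1, 1, 1, 1, 1, 91}; // one pathological method
              System.out.println(IntStream.of(cc).average().getAsDouble()); // 10.0 - looks tame
              System.out.println(IntStream.of(cc).max().getAsInt());        // 91 - the hot spot
          }
      }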

    <<We meant that the value was calculated using the method described in Henderson-Sellers' book (Object-Oriented Metrics: Measures of Complexity).>>

    Check how you wrote his name on http://www.virtualmachinery.com/jhawkmetricsclass.htm


    <<Thank you for your critical analysis - it's always good to know that people read and use the stuff we write.>>

    Technically, I use metrics extracted by McCabeIQ and have developed home-made tools, but I'm always interested in the whole metric extraction tool ecosystem.


    <<JHawk ships with documentation describing the calculation of each of the metrics. If you don't like the way a particular metric is calculated, we provide a method to add your own metrics to JHawk, using data collected by the tool as well as external data. Extensive documentation and examples are provided.>>

    The tool seems nice and is reasonably priced; I wish you guys good luck. I'll keep an eye on your product.