Discussions

News: Business Readiness Rating initiative underway for open source

  1. Carnegie Mellon University West Center for Open Source Investigation, along with co-sponsors such as Intel and SpikeSource, has begun an initiative called the Business Readiness Rating (BRR), intended to let the entire community (developers and enterprise adopters) rate software in an open and standardized way.

    This initiative is spurred by the existence of over 100,000 open source projects listed on various sources such as SourceForge, CodeHaus, Tigris, Freshmeat, Java.net and others. (For example, there are 375 content management apps alone.) While some projects are of high quality, others are less mature and present risk. Evaluation is often done on an ad hoc basis, using homegrown mechanisms and without access to useful assessment data or methods.

    The BRR is an attempt to give companies a trusted, unbiased source for determining whether the open source software they are considering is mature enough to adopt. It also gives open source developers a way to gauge the reliability of their own projects and of others', helping improve quality across the board.

    The model proposes standardizing on different types of evaluation data and grouping them into categories such as functionality, usability, security and scalability, in a four-step process. The currently proposed use case is that a user inputs requirements and ranks them, and the BRR then yields scores ranking candidate packages against those requirements, with the scores based on feedback from other users.
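
    As a rough illustration of that kind of weighted scoring, here is a minimal sketch in Java. The category names come from the proposal, but the class name, weights, sample scores and the simple weighted-sum formula are invented for illustration and are not the actual BRR method or tooling.

        import java.util.Map;

        // Minimal sketch only: category names from the proposal; the weights,
        // sample scores and weighted-sum formula are invented for illustration.
        public class BrrScoringSketch {

            // A user's ranking of the requirement categories (weights sum to 1.0).
            static final Map<String, Double> USER_WEIGHTS = Map.of(
                    "functionality", 0.40,
                    "usability",     0.20,
                    "security",      0.25,
                    "scalability",   0.15);

            // Per-category scores (1-5) would come from other users' feedback.
            static double weightedScore(Map<String, Double> categoryScores) {
                return USER_WEIGHTS.entrySet().stream()
                        .mapToDouble(e -> e.getValue()
                                * categoryScores.getOrDefault(e.getKey(), 0.0))
                        .sum();
            }

            public static void main(String[] args) {
                Map<String, Double> candidate = Map.of(
                        "functionality", 4.0, "usability", 3.0,
                        "security", 5.0, "scalability", 2.0);
                System.out.printf("Weighted score: %.2f%n", weightedScore(candidate));
            }
        }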

    Phase one is a public comment period, and open source developers are being asked to review the proposal and provide feedback.

    Threaded Messages (10)

  2. I think this is a good idea. Of course, many good ideas have gone awry.

    What many companies, corporations, organizations and individuals don't understand is that OSS development is different from pay-for development. And there are differences within OSS development itself. Things like release numbers don't mean the same thing. Some projects, while production ready, might not be at 1.0. Of course the opposite is true too. :) I had a conversation with someone the other day and was sort of "chastised" for presenting projects that were not at 1.0 or greater. Mind you, these were for internal use and mostly for trying out concepts. Of course, these projects were more than production ready.

    So in light of the above, I would welcome the blessing of a well-known organization on these projects.
  3. I think this is a great idea. With so many open-source projects available and competing in the same spaces, it is difficult to decide whether a given product should be used or passed over.

    Some companies have resorted to brand names: "If it is from Apache, OK; otherwise, no."

    Hopefully the initiative won't devolve into an OSS popularity contest.

    Now if we could only get the same sort of thing going for off-the-shelf products. Is feature X in WebSphere Business-Ready?
  4. Excellent Idea

    This is an excellent idea. It certainly helps in separating the wheat from the chaff. I hope it has enough academic rigor and is not swayed by commercial interests and lobbying.
  5. cool

    NEEDED for SO long.

    I wish them success. It is going to be difficult to make this work without the big guys: Apache, Sun, IBM, JBoss and a lot of others.
  6. Good initiative!

    This is a very good initiative. It not only benefits businesses evaluating open source software, but also allows software projects to improve in the areas that potential users consider important.

    For example, we believe that our SourceForge project, XINS, is a high-quality software product. However, it is currently mostly marketing and user perception that dictate how our software is rated. The Business Readiness Rating would allow us not only to improve in the described areas but also to show how (well) we perform in them.
  7. Freeware vs free market.

    At first I laughed at this. Business wants software of officially assured quality but doesn't want to pay for it. And this at a time when commercial ISVs have mostly balked at achieving quality certification from ISO or SEI, presumably because customers don't demand it. So I initially thought BRR was doomed. Isn't CMU the school that hosts SEI, in addition to BRR?

    But now I think about open source developers. Until now, these volunteers never knew when a project was done. With BRR certification as a finish line, an open source developer knows clearly when his project is complete enough to put on his resume and possibly walk away from. The advent of BRR would be a cultural milestone. But the BRR whitepaper mentions CapGemini's Open Source Maturity Model, which presumably never got traction, so precedent is against BRR flourishing.

    The BRR whitepaper actually discusses metrics, including, for usability, "How good is the GUI?" That's a direct quote from the paper. Is objective evaluation of a GUI even possible? If so, why do commercial GUIs get to invade the market without any expectation of a scientific evaluation? Do customers really care about science? The whitepaper also gives scalability as a metric. Scalability is surely more objective than usability, yet even scalability is hard to quantify in an uncontroversial way.

    The whitepaper also seeks to quantify "What is the level of the professionalism of the development process and of the project organization as a whole?" (a direct quote). So it seems CMU is back to the SEI CMM. Tigris is a premier open source shop, yet look at its bug tracking product, Scarab. It's utter trash. Would Scarab's elaborate design documents earn it extra credit that boosts its BRR score above possibly better rivals? Is being hosted at one of the few major open source depots (SourceForge, Freshmeat, etc.) a prerequisite for BRR? BRR scores according to downloads, something easily spoofed. Is the market's invisible hand so feeble that a self-appointed BRR is necessary to expose best-of-breed products?

    I suspect BRR is only politically possible with freeware. Commercial ISVs are an industry that would never allow its wares to be scrutinized that way. Look at the backflips app server vendors do to prevent comparative benchmarking. Commerce would sue BRR out of existence. And if BRR does flourish, how many rival commercial ISVs will perish? Does that imply that commerce is an inferior way to create general-purpose products?
  8. Meaningful metrics

    I'm not sure that the paper brings anything worthwhile, and I sense a great deal of confusion on the part of the people who wrote it.

    Raw metric numbers are at best useless: do you seriously expect every open source component to have the same activity, bug reports, etc.? If we were to compare Tomcat, Jetty and Resin, we would come up with very different numbers. Yet all three have an established reputation, the latter two especially among hardcore developers.

    It is extremely hard to rate an open source software component. A rating is usually a mix of mailing list activity, code readability, documentation, ease of use, and the service the component provides for a given effort versus doing it yourself.

    The number of bug reports is actually the most misleading metric; people who don't understand the purpose of a bug-reporting tool, or the nature of software, will usually come up with exactly what should not be done.

    For instance, since the number of reported bugs is a direct function of the number of users, I expect to find far more bugs filed against a highly successful product with a wide range of users than against a similar component with very low adoption. Yet the BRR suggests exactly the opposite.
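
    To make that point concrete, here is a minimal sketch with made-up numbers (the project figures are invented purely for illustration): raw bug counts penalize the widely adopted project, while per-user rates tell the opposite story.

        // Minimal sketch, made-up numbers: raw bug counts penalize popular projects;
        // normalizing by adoption reverses the picture.
        public class BugRateSketch {
            public static void main(String[] args) {
                int popularBugs = 1200, popularUsers = 100_000; // widely adopted project
                int nicheBugs   = 40,   nicheUsers   = 500;     // barely adopted project

                System.out.println("Raw counts: popular=" + popularBugs + "  niche=" + nicheBugs);
                System.out.printf("Per 1000 users: popular=%.1f  niche=%.1f%n",
                        1000.0 * popularBugs / popularUsers,
                        1000.0 * nicheBugs / nicheUsers);
                // Prints 12.0 bugs per 1000 users for the popular project
                // versus 80.0 for the niche one.
            }
        }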

    Note: it is sadly common in IT companies to have a policy of not logging any bugs past a certain point in a release, because the customer would then assume the product is not of sufficient quality (horror! there are bugs! how come??), refuse to sign off on the delivery... and thus not pay for the software.


    Anyway, to come back to how I evaluate an open source product (a rough scorecard sketch follows the list):
    - I browse the mailing list archives to get a general feel for the spirit, the tone, ...
    - I look at the license
    - I identify the founders and lead developers, and where they are coming from. If the project has been developed by a small set of people within a single company, that generally smells bad, because they tend not to communicate well over mailing lists and all decisions are made privately (across the desk, etc.), which makes it hard for outsiders to follow the product.
    - I look for comments in the code in the form of //TODO, //FIXME, CVS comments, etc., which indicate that the developers care about others (and themselves). For instance, having no comments on CVS check-ins is highly problematic, because people will not waste time guessing at a diff to figure out what has been checked in (assuming CVS commit mails go to the mailing list).
    - I look at the source code (is it easy to understand, or is it a complete mess with 25 negations and 400 SLOC per method?) and try to figure out the architecture and dependencies. If I can't understand a word of it, that smells bad, because the code will be hard to maintain and to provide patches for; ultimately that means low quality.
    - I try to build it. If it does not build out of the box and there are no instructions for doing so, wow, Houston, we have a serious problem.
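
    Here is the rough scorecard sketch mentioned above. The criteria are paraphrased from the checklist; the 0-2 scale and the plain total are just one way of tallying them, invented for illustration and not any official metric.

        import java.util.LinkedHashMap;
        import java.util.Map;

        // Minimal sketch: criteria paraphrased from the checklist above, scored
        // 0 (bad) to 2 (good); the scale and the totalling are invented.
        public class ProjectScorecard {
            public static void main(String[] args) {
                Map<String, Integer> scores = new LinkedHashMap<>();
                scores.put("mailing list spirit and tone",           2);
                scores.put("acceptable license",                      2);
                scores.put("decisions made openly on the lists",      1);
                scores.put("meaningful code and CVS commit comments", 1);
                scores.put("readable source, clear architecture",     2);
                scores.put("builds out of the box",                   0);

                int total = scores.values().stream().mapToInt(Integer::intValue).sum();
                scores.forEach((name, score) -> System.out.println(name + ": " + score));
                System.out.println("Total: " + total + " / " + (2 * scores.size()));
            }
        }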

    Ultimately, what will make an open source project successful is traction. Traction comes from 'marketing', which can be done via carefully chosen weblogs from leading individuals or organizations, articles, books, software conferences, portals... and of course via the topic the project is supposed to cover. I'm afraid the next CMS, web framework or persistence API will have a very hard time getting decent visibility, considering the mass of existing ones today...

    So, as I see it in the document, people might still view open source software as a way to get something for nothing, assuming that a thousand monkeys are working in their free time to solve their problems... without even donating a line of documentation in return.

    Clearly, people do open source projects and contribute code in their free time because they like to brainstorm on puzzles and do new things. It's a brain game. Maintenance is actually a bit of a boring task (unless you have access to a bunch of high-end servers to play with for a while to optimize the components).
    Graphic design is also a fun and creative task that you can do in your free time. Documentation is another matter: I have rarely met anyone who enjoys writing it. So if you want decent documentation, you need to pay someone for it... the same goes for integration and scalability testing. It might be fun to do, but ultimately it requires decent tools... and hardware.

    Maybe Intel, HP or Sun could eventually plan to donate access to a range of high-end computers to a few selected projects in order to add traction to OSS, just as Google did with the Summer of Code.

    Is anyone interested in doing some scalability tests on an HP 9000 Superdome and a Sun Fire E25K? :o)
  9. Meaningful metrics

    I'm not sure that the paper brings anything worthwhile, and I sense a great deal of confusion on the part of the people who wrote it.
    Do you feel the same about SEI's Capability Maturity Model scoring? Has anyone ever worked at a level 4 or 5 shop? I haven't, so I can only wonder what it's like.
    Raw metric numbers are at best useless...
    The whitepaper's metrics seem good enough, I hope. It's a bright idea to formally study the best open source efforts. I'm guessing that less than 1% of SourceForge's 100,000 projects would get scored, and that about half of those would earn certification.
    ...do you seriously expect every open source component to have the same activity, bug reports, etc.?
    You mentioned the whitepaper, but did you see Appendix 2, "The Characteristics of Mature Open Source Software"? It's an interesting list of 25 traits, including "#6 There is a well-defined process to enter the core development team.", which seems to imply that only teams that are growing or have grown are certifiable. Then there's characteristic #12, "Books are readily available.", which means that the market must already have chosen the project before BRR can choose it, which makes the added value of BRR less obvious.
  10. Meaningful metrics

    Do you feel the same about SEI's Capability Maturity Model scoring? Has anyone ever worked at a level 4 or 5 shop? I haven't, so I can only wonder what it's like.

    AFAIK, the SEI CMM(I) model does not provide you with any metric that says '20 is good, 30 is bad'. Rather, it provides an organizational process improvement plan in which it is your responsibility to gather and analyze historical data to develop and calibrate your estimation model. The data collected depend heavily on the project type (embedded, web, ...) and on the team, so that makes perfect sense.

    In short it provides guidance so that the software organization can grow from a chaotic environment toward a mature and disciplined software process.

    The different CMM levels actually refer to the maturity of the process within the organization. Level 4 means you have a quantitative understanding of both the software process and the software work products; Level 5 means you are able to measure software process improvement continuously (and are thus capable of optimizing and adapting).
    You mentioned the whitepaper, but did you see Appendix 2, "The Characteristics of Mature Open Source Software"? It's an interesting list of 25 traits, including "#6 There is a well-defined process to enter the core development team.", which seems to imply that only teams that are growing or have grown are certifiable.

    Yes, I have read the appendix, and, as I said, I find that they are missing the most fundamental and objective metrics: code quality/readability and community acceptance/responsiveness.
    Then there's characteristic #12, "Books are readily available.", which means that the market must already have chosen the project before BRR can choose it, which makes the added value of BRR less obvious.

    Yes, I agree. If the model rates products that are already successful and recognized as 'good', and not-yet-successful ones as 'unacceptable', what's the point of the rating? :)

    Would you rate commons-io, Struts and JBoss with the exact same metrics?
  11. IMHO, a good open source solution anticipates problems and plans and implements enhancements well before actual requests come in. Tomcat, for example, is one of the best tools I have ever used. IMHO, its development team anticipates future requirements pretty well (though cluster support was an exception). For example, its JMX hooks are great and were ahead of the request curve.

    - Saju Thomas
    Ishi Systems