Software Metrics Don't Kill Projects, Moronic Managers Kill....


  1. Jared Richardson posted a link to "Software Metrics Don't Kill Projects, Moronic Managers Kill Projects," from Alberto Savoia. It's a post generated from a conversation between Alberto, Joel Spolsky, and Eric Sink.
    Joel is not a big fan of software metrics in general. He is concerned that developers might end up writing code and allocating their time to satisfy a specific metric rather than writing the best possible code and allocating their time based on more important criteria. He told a couple of stories about horrific metrics misuse that he witnessed first-hand and was concerned that - in the wrong hands - the CRAP metric could be used in, say, performance reviews: "Your code is too crappy. You're fired!"

    I understand that there is potential, as well as some evidence, for software metrics misuse; but I don't think that's sufficient reason for avoiding metrics altogether. My reply to Joel was that if an organization/manager is so lazy and stupid as to rely exclusively on any given code metric in evaluating programmers, then those programmers are probably better off being fired from that organization anyway. Better yet, the programmers would have great evidence to have the moronic manager fired. While I understand that any tool, technology, or information can be abused by evil people and misused by stupid ones, I don't think we should use "How could this be abused or misused?" as the primary criterion - at least not without first balancing it against the potential benefits. ...

    What do you all think about software metrics? Aren't you a bit surprised that, despite the fact that software runs the world and that we spend hundreds of billions a year writing and maintaining software, there isn't a single industry-wide metric that's used with any consistency? Why is that? Are all software metrics inherently evil and useless? Is programming so much more art than science or engineering that it's pointless to try to quantify or evaluate code using objective criteria? What do you think? Do you have any software metric horror/success stories to share?
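    For readers who haven't seen it, the CRAP metric combines a method's cyclomatic complexity with its test coverage. A minimal sketch of the published formula (the "crappy" threshold of 30 is the crap4j default, quoted here from memory):

```python
def crap_score(complexity: int, coverage_pct: float) -> float:
    """CRAP score of a single method: comp^2 * (1 - cov)^3 + comp,
    where comp is cyclomatic complexity and cov is coverage in [0, 1]."""
    cov = coverage_pct / 100.0
    return complexity ** 2 * (1.0 - cov) ** 3 + complexity

# Complex and untested scores badly; simple or well-tested does not.
print(crap_score(10, 0))    # 110.0 -> over the threshold, "crappy"
print(crap_score(10, 100))  # 10.0
print(crap_score(3, 0))     # 12.0
```

    The point of the formula is that coverage only buys forgiveness for complexity; a trivial method is never CRAP, tested or not.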

    Threaded Messages (34)

  2. is building a house art?
  3. Yes[ Go to top ]

    Paul Graham has written quite eloquently on the subject of software as an art. Even though there are formalisms and heuristics, metrics will always be just one viewpoint that, like everything else, may be misused/abused. We all write/design software, and it comes from our heads (obviously); however, it is one of the least formal of the "engineering" disciplines, and it requires a greater amount of creativity to do the work as opposed to, say, chemistry, where certain interactions don't waver from the norm.
  4. Agreed...[ Go to top ]

    Hi Jin, I agree with you that our industry is one of the least formal, but I think we're also still learning how to measure the work we do. It feels like many areas of our industry have simply ignored metrics completely when some of them can be very useful. By the way, here's a link to my original post: http://www.6thsenseanalytics.com/blog/posts/1007#more-1007

    Jared
  5. is building a house art?
    Could be; you should look at my house and the work I am doing. If you love what you are doing, it shows.
  6. I think that many of us are too biased when it comes to metrics - this may be rooted in school, where teachers/professors often repeat that metrics are evil, period. On the other hand, many of us don't get much feedback at work. For me, personally, the greatest motivator to develop my skills is seeing that someone is actually better than me at something. So I even like seeing my work being measured and benchmarked against colleagues. But we should not forget that metrics are just another tool in managing software development. Just like any developer abusing Hibernate or Spring, our bosses tend to abuse any absolute number that gives them the possibility to benchmark us. And I don't think that bold statements like "Metrics are evil" are of any help! In my opinion, we should improve the situation by helping managers understand how to use metrics wisely rather than taking this tool away from them.
  7. "Moronic managers kill projects" is an example of what I think of as "The Dilbert Myth," the notion that developers are smart and competent but managers are stupid. Since there are far more developers than managers, this is an easy myth to perpetuate. Unfortunately, I've seen far more projects killed by stupid, incompetent developers (who always blame their managers) than by bad management. The biggest sin that managers make is to hire and/or not fire bad developers. Unfortunately the proportion of competent to incompetent people working in software development is quite low, so this is also the biggest challenge.
  8. Metrics + Brains[ Go to top ]

    Metrics are great if you use your BRAIN to read and analyse them. Unfortunately, metrics are useless without a person interpreting the values. Mark http://www.sourcekibitzer.org/Bio.ext?sp=l8
  9. Metrics + Tools[ Go to top ]

    Unfortunately, metrics are useless without a person interpreting the values.

    Mark, I totally agree with this. Metrics are DEFINITELY a useful thing to build into a feedback loop; indeed, every other industry does this quite effectively (and unfortunately we are some way behind). If you don't analyze them and feed the results back into the team somehow, then it is a waste of time collecting them.
    Also, using tools correctly, and not to game the numbers, is important. There is a good example of this here: Hidden Dangers of Code Quality Tools.
    Paul stated "the misguided idea that somehow tools can be an effective replacement for genuinely skillful people"; he is right, but tools do augment these skills and save a LOT of time - some of the things these tools can do automatically would be almost impossible for a person to perform effectively. Rich
  10. Bad measurement reduces quality.[ Go to top ]

    People who have done research on performance measurement show quite clearly that measurement can create problems by itself. If the person being measured is in any way affected by the outcome, they're likely to either lie (subvert the measurement) or give you only what you measure. Bad measurements create dysfunctions that reduce quality instead of increasing it. The problem is that it's incredibly hard, if not impossible, to create measurements that aren't bad in as complex and individual a context as software development. It's not something that should be left to Mr. J. Random Manager. /L
  11. People who have done research on performance measurement show quite clearly that measurement can create problems by itself.

    If the person being measured is in any way affected by the outcome, they're likely to either lie (subvert the measurement) or give you only what you measure.

    Bad measurements create dysfunctions that reduce quality instead of increasing it. The problem is that it's incredibly hard, if not impossible, to create measurements that aren't bad in as complex and individual a context as software development. It's not something that should be left to Mr. J. Random Manager.

    /L
    The problem is that most people confuse metrics and ratings, or they want to turn metrics into ratings. A metric should tell you something useful about your project, process, and/or product. That doesn't have to be good or bad. It just has to be useful.
  12. People who have done research on performance measurement show quite clearly that measurement can create problems by itself.

    If the person being measured is in any way affected by the outcome, they're likely to either lie (subvert the measurement) or give you only what you measure.

    Bad measurements create dysfunctions that reduce quality instead of increasing it. The problem is that it's incredibly hard, if not impossible, to create measurements that aren't bad in as complex and individual a context as software development. It's not something that should be left to Mr. J. Random Manager.

    /L
    Correct. I've encountered that myself more than once. "Performance" or "Quality" (always spelled with a capital, of course) is going to be measured using method X, and suddenly the focus of the organisation shifts towards getting the highest possible score on X, to the neglect of everything else.

    On one project the metric was LOC (lines of code) per set period. As a result there was no refactoring, dead code was never removed, and copy/paste programming was encouraged. Why? Refactoring usually produces less verbose code, thus fewer lines. Removing dead code actually removes lines, leading to a negative LOC count which has to be made up for by producing more new code (and now in less time, because you spent that time removing dead code). Copy/paste programming almost by definition produces more code than properly structuring things into modules with methods that are called repeatedly, so you're (on paper) more productive that way.

    It ended when the team I was a part of decided to go ahead anyway and remove reams of dead code. We ended up removing something like 50,000 lines of dead code over a three-month period, while writing something like 20,000 lines of new or changed code in the same period. We got an official reprimand for having a negative LOC count over the period. When it was explained why that count was negative, the house of cards built around that metric started to fall apart. I left before something new was put in place, but there was talk of FP ("function point") counting as the new metric (which would of course lead to things being split up into units as narrow as possible to increase the number of FPs implemented).
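    The arithmetic behind that reprimand is worth spelling out; a tiny illustration using the numbers from the story above:

```python
# Net LOC delta for the cleanup period described above: removing dead code
# made the team look *less* productive under a raw lines-of-code metric.
lines_written = 20_000   # new or changed code
lines_removed = 50_000   # dead code deleted

net_loc = lines_written - lines_removed
print(net_loc)  # -30000 -> "negative productivity" on paper,
                # although the codebase is strictly healthier.
```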
  13. When assessing the quality of somebody else's project, metrics and tools like Checkstyle are of great value. Clearly, I have to get rid of all the indentation warnings to see something valuable, but then I can see some figures about the underlying code. And those are often quite useful (like finding more than 8,000 uncaught exceptions ... that tells you something). Of course, those are my metrics, which I read at my level, and I can read what's behind them. If I am to present results to a manager, I'll use a different metric set, at a different level of abstraction. Something he can deal with effectively. Measuring a project from established metrics always fails. Metrics can point you to the problem, but they are just projections of the problem, not the problem.
  14. I can't speak for other organizations, but I personally have a very high incentive to evaluate developers accurately. Their abilities have a very direct impact on my income. It's to my benefit to evaluate them accurately. I don't happen to use metrics. Instead, I sit among the developers, I read every Subversion check-in message that comes through, I listen to them talk during the day, I use our software every day, I regularly watch our technical support tickets to see where our software may have bugs to get a feel for the quality of the developers' output, I talk to the team lead regularly to see who's doing well and who could improve, etc. All of that helps me develop an innate sense of where each developer stands. If someone applied some metrics to our development team, I would certainly look at them, and then of course, I'd mentally cross-check that with my own impressions of how each developer is doing. Cheers, David Flux - Java Job Scheduler. File Transfer. Workflow. BPM.
  15. moronic manager[ Go to top ]

    Yes, we all know 'moronic managers' get fired. ;-)
  16. I didn't read the article, but when I smell QA and metrics I always read this with a lot of pleasure: http://www.cs.wustl.edu/~schmidt/editorial-5.html Guido
  17. The point of metrics is to help you gauge how well you are doing in terms of some goal or objective. So what goal or objective are we creating our metrics for? A key problem we face is measuring quality, because metrics of production are meaningless or even counterproductive if we can't assess quality. Then there are tradeoffs, such as: do you want the fastest or the best? What is the trade-off between time to market and quality? What are the metrics of quality? Can they be measured and assessed? A number of quality attributes don't pay off until well into the life-cycle. We may not recognize differences in quality until we have serious maintenance to do, years after the original system was developed.

    When comparing two developers or their work, I feel I can evaluate them relative to each other, especially if they are doing the same kinds of work. I believe my assessment would be objective insofar as their persons are concerned. However, my assessment of what is better or worse is based on my experience and point of view. That isn't something I can put many metrics around, and the metrics I can come up with are not absolute. That is, sometimes doing the opposite of what the metric defines as "good" is the best course of action.

    Don't we face this problem in health care? Is the measure of a doctor the number of patients he sees, or the survival rates of his patients? If the doctor is seeing only healthy patients, it isn't. A good doctor may not do as good a job of relieving the patient of symptoms, because that would be counter to what is best for their long-term health. Would you prefer the doctor who made you feel better right away at the cost of long-term debilitating side effects that show up a few months or years later? Long-term health or short-term relief - which is it to be?

    In IT, the best metrics are those that are tied to the performance of the business. The Federal Enterprise Architecture Framework (FEAF) has a Performance Reference Model (PRM) that is used to establish such metrics. The point is that your end goal is how well your agency or company is performing its function, so the important IT metrics are those that contribute to those business metrics. These are group metrics, not individual metrics. I don't care how smart a manager is: if you have objective metrics and still give the manager an opportunity to make subjective judgments (as the metric advocates recommend) based on "common sense" (also subjective), then you end up with a situation where metrics are used only when you agree with the conclusions they point to. So why bother?
    The point of metrics is to help you gauge how well you are doing in terms of some goal or objective. So what goal or objective are we creating our metrics for? A key problem we face is measuring quality, because metrics of production are meaningless or even counterproductive if we can't assess quality.
    I totally disagree! Measuring quality is not the only important thing about software development. In my opinion, the importance of quality is very much overestimated by developers. Sometimes the following factors are much more important in software development projects and teams: speed or on-time delivery, the ability of a developer to work in teams, and the knowledge and expertise gained by the team during the project. And there are more aspects you can think of, if you think outside the box. We should try to estimate and measure factors other than quality that are also important to different stakeholders in software development. Mark http://www.sourcekibitzer.org/Bio.ext?sp=l8
  19. The point of metrics is to help you gauge how well you are doing in terms of some goal or objective. So what goal or objective are we creating our metrics for? A key problem we face is measuring quality, because metrics of production are meaningless or even counterproductive if we can't assess quality.


    I totally disagree! Measuring quality is not the only important thing about software development. In my opinion, the importance of quality is very much overestimated by developers.

    Sometimes the following factors are much more important in software development projects and teams: speed or on-time delivery, the ability of a developer to work in teams, and the knowledge and expertise gained by the team during the project. And there are more aspects you can think of, if you think outside the box.

    We should try to estimate and measure factors other than quality that are also important to different stakeholders in software development.

    Mark
    http://www.sourcekibitzer.org/Bio.ext?sp=l8
    Quality is such an abused term.... Talking about metrics, I have seen a C module with 99% coverage reported by Logiscope that was totally broken. How would those smart metrics-driven managers score those developers? Guido
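    A hypothetical sketch of how that happens: a test suite can execute every line of a broken function (full line coverage) while asserting nothing, so the coverage report stays green. The function and test below are invented for illustration:

```python
def parse_price_cents(text: str) -> int:
    """Convert a price like '19.99' to whole cents.
    Buggy: it drops the fractional part entirely."""
    return int(float(text)) * 100

def test_parse_price():
    # Executes every line of parse_price_cents -> 100% coverage,
    # but asserts nothing, so the bug is never noticed.
    parse_price_cents("19.99")

test_parse_price()
print(parse_price_cents("19.99"))  # 1900, not the correct 1999
```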
  20. Talking about metrics, I have seen a C module with 99% coverage reported by Logiscope that was totally broken.
    How would those smart metrics-driven managers score those developers?

    Guido
    A metrics-driven manager would have complementary metrics whose values would raise a red flag. Logically, if something is totally broken then it is being rewritten and patched a lot -> hence the defect distribution and developer activity distribution among modules would indicate that something is wrong. For the sake of developer evaluation, one could check how much and how often code written by a particular developer has been rewritten. The idea of one ultimate metric that would tell everybody everything about everyone is totally flawed. One has to look at things holistically, from different perspectives. Software metrics are not a silver bullet, but rather a complementary tool.
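    As a sketch of such a complementary signal, one could mine the version-control log for churn per module; the commit data and the "2x average" threshold below are invented for illustration:

```python
from collections import Counter

# (module, lines changed) pairs, e.g. mined from the VCS log -- made up here.
commits = [
    ("billing", 120), ("billing", 300), ("billing", 250),
    ("reports", 40), ("auth", 25),
]

churn = Counter()
for module, lines_changed in commits:
    churn[module] += lines_changed

# Flag modules rewritten far more than average: likely trouble spots,
# whatever the coverage report says about them.
average = sum(churn.values()) / len(churn)
hotspots = [m for m, total in churn.items() if total > 2 * average]
print(hotspots)  # ['billing']
```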
  21. Software metrics are not a silver bullet, but rather a complementary tool.
    +1
  22. I totally disagree!...

    We should try to estimate and measure other factors than quality that are also important for different stakeholders in software development.

    Mark
    Are you saying we shouldn't measure quality? Are you saying that we should not consider quality when measuring productivity? Are you saying that it is better to produce crap fast than to take a little longer to produce something that works well? If so, are you saying that it is always better to produce crap fast? Unless your answers are all yes, then you don't "totally disagree".
  23. So what goal or objective are we creating our metrics for?
    Great question.
    A key problem we face is measuring quality, because metrics of production are meaningless or even counterproductive if we can't assess quality.
    I'm not sure how you arrived here though... metrics can be used for many things, not just quality. And various metrics can be used to help you understand other quality numbers... but sometimes they stand alone. Here are a few examples of things you can use the 6th Sense tools (my employer) to measure: http://www.6thsenseanalytics.com/solutions/use-cases/ I don't advocate the blanket use of metrics Because They Are Good. I'd rather someone learn about the system they are trying to measure, experiment with a few tools, see what's effective, and then use those tools. If it's not effective, move on. Throwing out an entire category of tools for any reason is a mistake, and a sign of dogma, not reasoned thought. Blindly using a category of tools is the same type of mistake.
  24. So what goal or objective are we creating our metrics for?


    Great question.

    A key problem we face is measuring quality, because metrics of production are meaningless or even counterproductive if we can't assess quality.


    I'm not sure how you arrived here though... metrics can be used for many things, not just quality.
    My point is that if you try to measure something like productivity, you need to be able to compare apples to apples. Producing more of something, or producing it faster, is not good unless the quality of what you produced is at least acceptable, if not comparable. You need to be able to define and measure quality in order to determine that. Unfortunately, for many important quality attributes that is very difficult to do. Let's consider TCO. That's a life-cycle cost. It incorporates many things that can be reflective of the quality of architecture, design, and programming. What metrics successfully predict the life-cycle TCO of software or systems? What metrics successfully predict future bugs and failures? If you invest in full-coverage testing you'll improve your accuracy, but that is not easy. It's also pretty hard to prove that a system inferior in some number of respects will necessarily have a greater or lesser TCO than another. Anyway, my bottom line is that group-level metrics work better than those that target individual performance, and that IT metrics that don't contribute to measuring IT's contribution to the objectives of the enterprise it serves generally aren't worth collecting. We (IT) aren't here as an end in itself.
  25. Metrics??????[ Go to top ]

    Metrics are the most stupid things in the world of computing. I don't think any software developer should get trapped by a stupid manager's metrics. In my last 14 years of work, I have seen a number of these idiots who were never supposed to have a JOB in any field related to mathematics, science, or computing engineering, but they are sticking around as managers. Most of the firms I visit as part of my job are Fortune 500 companies, and most days when I go back to the hotel room, I have something to think about: another stupidity I have seen from a manager's metrics. As hippies invented most of the Agile products in the world, it all comes down to how smart and agile a programmer thinks; that should be the metric for evaluation, not something a STUPID made up to keep his job and his boss happy. Thanks
  26. Metrics[ Go to top ]

    Today is yesterday's tomorrow.
    +2
  27. Lying Metrics[ Go to top ]

    I've been pondering this exact situation for the last few weeks. I inherited a system back in August and have been looking at the code, the developers, and the tech lead, trying to figure out how best to help these guys produce the best system, in the least time, with the most features, for the least money. (And so forth...)

    Last week I brought the code into continuous integration. (It's Java and we're using Maven.) Now I can tell when the code is broken. That's one part of how to evaluate it. Of course, if nobody adds new code, the build will never break. So I have to take some measure of added code or added features and such into consideration.

    One interesting thing I noticed when I added a bunch of reports to the build results: CheckStyle, FindBugs, and PMD look at the code for various "problems". But since the code hasn't been looked at in this way before, they report thousands of issues. Now, if we measure the code by the number of issues found, that number should get smaller if the code is getting better. But all it takes to get rid of a thousand of these is to bring 20 or 30 source files into the IDE and reformat them. I know that's easy to change; things like misplaced braces don't have to be reported on. But it becomes a matter of diminishing returns. How many things do I need to ignore to find the really juicy bug indicators? So I ignore the misplaced braces and such. The code, as a whole, becomes harder for the developers and reviewers to read, especially any new team member. That could lead to flakier code in the long run. But does it really matter?

    I know it's a hard problem. It's obvious everyone here knows you can't just pop up a number that tells how good a bit of code is. No number tells who is a good developer. It seems to be just as much of a problem that managers are given so much to do that they don't have time to properly evaluate how well things are going. In that case managers are tempted to take the easy route and use those metrics.

    Finally, another way to look at it. If we think a "good" manager can integrate all the metrics they look at, all the human interactions, all the code reviews, and 360-degree ratings, and come up with a measure of how well a project is doing or who on the project isn't pulling their weight, can we write an algorithm or optimizer or neural network to do the same thing? I'm asking if we could come up with more complicated metrics, in essence, that would do a good job of evaluation. All we need is a number that does a better job than a "moronic manager" to make a net positive impact.
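    One crude way to handle that diminishing-returns problem is to partition the report up front instead of suppressing rules one by one; a sketch with invented rule names and findings:

```python
# Separate style noise from likely bugs in a static-analysis report, so
# brace placement doesn't drown out null dereferences. The rule names and
# findings below are invented for illustration.
STYLE_RULES = {"LeftCurly", "LineLength", "WhitespaceAround"}

findings = [
    {"rule": "LeftCurly",       "file": "Foo.java"},
    {"rule": "NullDereference", "file": "Bar.java"},
    {"rule": "LineLength",      "file": "Foo.java"},
    {"rule": "ResourceLeak",    "file": "Baz.java"},
]

likely_bugs = [f for f in findings if f["rule"] not in STYLE_RULES]
print(len(likely_bugs))  # 2 of the 4 findings survive the filter
```

    Tracking the size of the filtered list over time is then a very different (and less gameable) number than the raw issue count.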
  28. My project aims to visualize metrics in a weighted and aggregated way, see: http://complexitymap.com. Although somewhat arbitrary, it does help discussion in environments where the only thing managers ask is: "When is it done?" BTW: I'm planning to open source the tool. Cheers, - Mark
  29. As someone pointed out, quality metrics are evaluated by managers. Here, moronic or not is unimportant. What matters is the role they play: preserving their position, providing a justification for their existence and salary. In my experience, these managers once were the technicians sketched in Douglas Schmidt's editorial. Where did they gain the necessary skill to evaluate software quality? But maybe the best question is: how much is such a manager paid? Isn't it better to save that money to hire more skilled developers? They are just like those analysts who call "analysis" a simple transcript of the customer's desires, with an addendum listing the applicable technologies (googled around). Which they have never used. Guido
  30. Shakespeare said it best[ Go to top ]

    "Kill all the managers!" Well, ok, the original Klingon translated better as "lawyers."
  31. In my experience people tend not to understand metrics and therefore misuse them. The first thing to bear in mind is that there are two broad categories of metrics: process metrics and results metrics. Process metrics are more about compliance with a given process or standard, whereas results metrics are a direct measure of results and effectiveness. Anyone who understands this realises that the metrics that count the most are results. Agile teams use a simple metric known as the team's velocity, which is a measure of the amount of software delivered by a team in a fixed time box. Another useful results metric is whether a team can consistently sustain good results. So if a team can maintain a consistently high velocity, then that is an indication that the code is of good quality, well designed, and will be easy to maintain in the future.

    Another useful categorisation of metrics is qualitative versus quantitative. Quantitative metrics such as team velocity tend to get the most attention, but qualitative metrics, such as whether end users rate the software highly on a scale of 1 to 10, or whether developers enjoy working on the project, are equally valuable IMO. It is easy, with a little imagination, to turn qualitative, subjective feedback into quantitative measures which can be tracked over time.

    As others have said here, software development is not a deterministic science. In other creative professions there is a tendency to rely mostly on results and qualitative metrics. So a play at the theatre that gets rave reviews from the majority of critics and does well at the box office is naturally deemed a success. I think we need to look to the creative professions when it comes to managing and measuring the effectiveness of software development.

    BTW, another thing that we would do well to borrow from other creative professions is the idea of hands-on mentorship and training of young apprentices learning their trade by experienced master practitioners. Like someone else has said, the biggest issue facing the software industry is the general acceptance of mediocrity amongst practicing software professionals, and the misguided idea that somehow tools can be an effective replacement for genuinely skillful people. Paul.
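    Since the claim is about *sustained* velocity rather than any absolute number, the trend is what one would actually track; a sketch with invented iteration data:

```python
# Velocity per iteration (story points delivered) -- invented numbers.
velocities = [21, 24, 23, 25, 22, 24]

def rolling_average(xs, window=3):
    """Smooth per-iteration noise so the underlying trend is visible."""
    return [sum(xs[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(xs))]

# A flat or rising trend suggests the pace is sustainable; a falling one
# suggests accumulating drag (debt, burnout), whatever the raw numbers are.
print(rolling_average(velocities))
```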
  32. So if a team can maintain a consistently high velocity, then that is an indication that the code is of good quality, well designed and will be easy to maintain in the future.
    I wouldn't call velocity a results metric. You're not measuring results; you're doing a simplified version of earned value. Also, your agile process probably says something about what a "good" velocity is. I think it's a process metric. But I'm quibbling.
  33. So if a team can maintain a consistently high velocity, then that is an indication that the code is of good quality, well designed and will be easy to maintain in the future.


    I wouldn't call velocity a results metric. You're not measuring results; you're doing a simplified version of earned value. Also, your agile process probably says something about what a "good" velocity is. I think it's a process metric. But I'm quibbling.
    Yes. It depends on what you count as a result. For instance, you could argue that the only results metric of interest is the return on investment, in which case the timely delivery of high-quality production code is only one contributing factor. I guess this comes down to the level of maturity of a given organisation. Optimising the whole would tend to point to more business-focused metrics. For most organisations, though, monitoring development team velocity is a good start. I agree with your comment on "ratings". What is important is not the absolute value but the trend. A "good velocity" is one that both the team and the customers are satisfied with. My point here was more to do with sustainability. So if a team can't sustain a given velocity, then I would argue that the velocity is not good for that team, whatever it is. Paul.
  34. "Metrics? We'll never get off the English system ..." (Anonymous) Kidding aside, the SEI's process maturity model puts metrics at level 4; in other words, walk first, then run. It doesn't make sense to measure if the infrastructure on which you're measuring is like quicksand. Configuration management, project control, and oversight are much more important.
  35. I have been on various projects without proper software metrics, and in my opinion metrics are still a very much underused tool. One of the reasons is that, at the end of the day and for most applications, development is driven by deadlines and functionality rather than by quality. Functionality can of course be a metric in itself (function points per unit of time, or something similar), but quality-related metrics with regard to modularity, readability, etc. tend to be overlooked. And of course there is the problem of choosing metrics that disguise problems. A somewhat bizarre trick is to make a bad piece of software look shiny by creating thousands of tests with very high coverage. That looks excellent in the coverage metric, but it does not increase the readability, modularity, or whatever of the system in any shape or form, where those might be the more important metrics.