Seven Low-Cost Ways to Improve Legacy Code


  1. Seven Low-Cost Ways to Improve Legacy Code (45 messages)

    Robert Simmons, Jr., author of Hardcore Java, has written up seven bullets on how to improve legacy code. The suggestions are all over the map:

    1. Use a Stronger Compiler for Your Code
    2. Use a Code Formatter to Reformat Your Code
    3. Introduce final All Over Your Code
    4. Remove Commented-Out Code
    5. Refactor Classes to Remove Anonymous Classes
    6. Replace Listeners with Weak Listeners
    7. Replace Integer Constants with Constant Objects or Enums
    These seven techniques are fairly low cost in terms of man hours, and all are cost-free in terms of equipment and software. They can all be performed by hand or with freely available software. Although they represent only the tip of the iceberg of code-quality improvement, they offer a good place to start when time is tight and quality demands are high. Given the tools at the disposal of software engineers, there should be no excuse for writing code that is anything other than solid as stone. This will allow you to concentrate less on finding little annoying typos and more on the business your customers or employers use to make money. They will be happier with your code, and you will have to work less to accomplish the same tasks.
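    To make item 7 concrete, here is a minimal, hypothetical sketch (the Color example is invented, not taken from the article) of replacing untyped integer constants with a Java 5 enum:

    public class Colors {

        // Before: any int compiles, so paintOld(42) is a silent logic error.
        public static final int COLOR_RED = 0;
        public static final int COLOR_GREEN = 1;

        static void paintOld(int color) { /* ... */ }

        // After (Java 5+): only a Color value compiles.
        public enum Color { RED, GREEN }

        static void paintNew(Color color) { /* ... */ }

        public static void main(String[] args) {
            paintOld(42);           // compiles, but is meaningless
            paintNew(Color.RED);    // type-checked by the compiler
        }
    }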
    Read Seven Low-Cost Ways to Improve Legacy Code

    Threaded Messages (45)

  2. Notice all of the places final is used in this code. It makes the code unbelievably solid.
    He-he-he... This makes the code really unbelievably solid. I.e. I won't believe it's solid. Because even if a List is declared as final, you can still modify its content at will. There's no C++ 'const' replacement in Java.
  3. Final Lists[ Go to top ]


    > Because even if a List is declared as final,
    > you can still modify its content at will. There's no C++ 'const'
    > replacement in Java.
    >

    Try java.util.Collections.unmodifiableList(List l). I use it all the time.
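    For anyone who hasn't used it, a minimal sketch (hypothetical names; Java 5 generics used for brevity). Note this is a run-time guard, not a compile-time 'const':

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class UnmodifiableListDemo {
        public static void main(String[] args) {
            List<String> names = new ArrayList<String>();
            names.add("alice");

            // The returned view rejects every mutation attempt at run time.
            List<String> readOnly = Collections.unmodifiableList(names);

            try {
                readOnly.add("bob"); // throws UnsupportedOperationException
            } catch (UnsupportedOperationException expected) {
                System.out.println("mutation blocked: " + expected);
            }
        }
    }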
  4. aholes are like opinions...[ Go to top ]

    The article disagrees with some opinions I've read elsewhere. Many of these "idioms" that people come up with and try to pass off as their own are often disagreed with by other "experts."

    I agreed with most, except the use of final. I use final for constants and static methods, but I HATE seeing final all over the place like in method signatures - it is SO distracting.
    I think you misinterpret the point. Applying final to an object reference means we cannot reassign the reference, although we can still change the object's properties or contents, like the example List. We use final so we do NOT accidentally re-point the reference at another object; that's what makes it solid. ;-)
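    A tiny sketch of that distinction, with hypothetical names:

    import java.util.ArrayList;
    import java.util.List;

    public class FinalReferenceDemo {
        public static void main(String[] args) {
            final List<String> items = new ArrayList<String>();

            items.add("allowed");                 // the contents may still change
            // items = new ArrayList<String>();   // does not compile: a final reference cannot be reassigned
            System.out.println(items);
        }
    }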
  6. Because even if a List is declared as final, you can still modify its content at will.
    Not if you wrap it with Collections.unmodifiableList(java.util.List). If the code has its own mutable objects, you can write similar wrappers for them if/when needed. There is no, and shouldn't be, a 'const' in Java. Trying to write C/C++ in Java is both lazy and stupid. Every programmer should know the "philosophy" and idioms of the language before writing "production quality" code with it.
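    One way to read "write similar wrappers", as a rough sketch around an invented Account class (not from the thread):

    // Read-only wrapper in the spirit of Collections.unmodifiableList,
    // written by hand for a mutable class of your own.
    public class ReadOnlyAccount {

        private final Account delegate;

        public ReadOnlyAccount(Account delegate) {
            this.delegate = delegate;
        }

        public long getBalance() {
            return delegate.getBalance();   // reads pass through
        }

        // No deposit()/withdraw() exposed, so callers cannot mutate the account.
    }

    class Account {
        private long balance;

        public long getBalance() { return balance; }
        public void deposit(long amount) { balance += amount; }
    }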
  7. good, bad and ugly[ Go to top ]

    This article is about the same advice 'quality' as the JDJ security article.
    I partially agree:
    1) Use a stronger compiler.
    Today this may not be the job of the compiler, but of formatting and checker tools. Even the much-bashed JBuilder shows unused variables.
    2) Use a code formatter.
    4) Remove commented-out code.
    When code reaches the maintenance stage, this is one of the things to do.

    Bad
    3) Introduce final all over your code.
    There are places where final belongs, but things that merely happen not to be overridden right now, yet could be, do not have to be final. If this advice is followed, it can create a code maintenance nightmare as well.
    This advice can help find unintended data changes, but in my opinion it's just not worth it.
    5) Refactor classes to remove anonymous classes.
    Don't do that. If you need anonymous classes, use them. Never replace anonymous classes with lots of small top-level classes.

    Ugly
    6) Replace listeners with weak listeners.
    This advice should be called: avoid the circular-reference bug.

    7) Replace integer constants with constant objects or enums.
    The pattern shown for creating type-safe enums can be used, but enums are not the only thing integer constants are used for. Why should you replace a constant
    like:
    public final static _CACHE_SIZE = 10000;

    Linards
  8. Agree[ Go to top ]

    I agree with Linards!
  9. typo[ Go to top ]

    Why should you replace a constant like: public final static _CACHE_SIZE = 10000;
    I think you wanted to say
    public static final int _CACHE_SIZE = 10000;
  10. Incidentally misleading[ Go to top ]

    It's not so much the main points of the article that make me go "eurgh" (though one or two of the main points are questionable) - it's the author's ability to get things incidentally wrong in passing that spoils what would otherwise be a mediocre but well meaning article. For example:
     
    "...creation of circular reference trees that pin objects in memory and interfere with garbage collection."

    The main point is fair enough though hardly unknown - i.e. that holding on to references to objects can lead to "memory leaks". But this has nothing to do with circular references - which in themselves are not a problem for any Java garbage collector that I know of. And the pedant in me is forced to point out also that if it has circular references then it isn't a "tree", it's a graph.
  11. Classes that should be sub-classed are in the small minority and should be developed with extra care. I tell developers I mentor to mark all classes final unless they explicitly write them to be safely overridden. This includes more extensive javadocing and putting much more thought into the design of the class so it best supports sub-classing. Is there a downside? I can't see one. If you later discover you want a class to be sub-classed:

    If you own the code:
    Remove the final modifier, refactor the class to be subclass friendly, improve the javadoc, and subclass away.

    If you don't own the code:
    Encapsulation is your buddy and 90% of the time the better way to go. "is a" relationships are far more rare than "has a" relationships, which is why I say that classes that should be sub-classed are also rare.

    Obviously, for trivial examples this isn't as much of an issue, but for real programs it really can improve the quality of the code base.

    Yes, putting final on a parameter, field, or variable won't keep it from being modified (if it is not a primitive or an immutable object) but will keep that variable from being reassigned, which I think is the author's point. It reduces logic errors because many of them will be caught by the compiler.
  12. Is final, Is good[ Go to top ]

    Classes that should be sub-classed are in the small minority and should be developed with extra care. I tell developers I mentor to mark all classes final unless they explicitly write them to be safely overridden.
    Agree. This is one C++ tweak where Java got it seriously wrong. The default should be final; the coder of the would-be superclass should have to justify why anyone would need to subclass it, as opposed to just use it.
  13. Is final, Is good[ Go to top ]

    Agree. This is one C++ tweak where Java got it seriously wrong. The default should be final; the coder of the would-be superclass should have to justify why anyone would need to subclass it, as opposed to just use it.
    Absolutely disagree. Using "final" is yet another way that one programmer arrogantly pretends to know the future. The "final" keyword, like any other, is a tool. Used poorly (like when you arbitrarily put it on everything) it is as destructive as any other thing taken to an extreme, such as:
    I think all classes, methods and fields should be marked as private.
    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  14. Disagreement[ Go to top ]

    Like I said... disagreement.
  15. Is final, Is good[ Go to top ]

    Using "final" is yet another way that one programmer arrogantly pretends to know the future.
    That's silly. Final endows a program with invariants that enhance quality. I appreciate that IntelliJ warns me when I've failed to declare some things final.
  16. Is final, Is good[ Go to top ]

    Cameron: Using "final" is yet another way that one programmer arrogantly pretends to know the future.

    Brian: That's silly. Final endows a program with invariants that enhance quality. I appreciate that IntelliJ warns me when I've failed to declare some things final.
    If you're thinking of "a program," then I can understand why you would appreciate the use of the "final" keyword for methods and classes, since it works well for a single developer on a single application with one-off code.

    When you're working on software that gets re-used (quite often in binary form -- not your very own "refactorable source" that you can fork on whim) for years and years by many different projects and developers, limiting their future options reflects the assumption that you know everything now that will need to be extended ever in the future. Having been bitten by such unwise decisions repeatedly (and with often quite painful effect) I can state with some surety that it is arrogant to assume that you know what can and will be extended in the future, and more importantly, "why" it will be (or should not be) extended.

    I can count the number of valid (IMHO) uses of "final" on one hand. Take java.lang.String as an example, because it is a constant. Maybe something truly security related, as you can see in various core classes such as ClassLoader's defineClass/resolveClass methods. That's about it, and I still have three fingers ;-).

    (And while I vehemently disagree with your technical opinion in this case, I hope it is clear that it's all in good cheer, and not at all intended to be personally disparaging.)

    BTW - it does take effort to build classes that can truly be safely extended without source being available (or even if it is available, assuming it cannot be modified for whatever reason.) However, following a few simple rules can do wonders for the reusability and extensibility of Java classes. Even if you don't follow those rules, though, making the class be final may prevent someone from working around some issue in the future, which is bad karma.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  17. Is final, Is good[ Go to top ]

    I can count the number of valid (IMHO) uses of "final" on one hand.
    You never mentioned debugging. If I inspect the value of a local variable at run time, and then execute a few more statements, how do I know what the variable's current value is? If it was declared final, then I know its value without inspecting it. The same is true of fields, method parameters, and caught parameters. Final boosts a data declaration's quality and readability. Final is more expressive.

    Final data is good in other ways. E.g., when an inner class wants to access a local variable of the enclosing scope. The immutable pattern is best implemented with constructor-assigned final fields.

    When it comes to data, I declare it final by default, unless I have a special need for mutability. When you read my code, you recognize a genuine need for mutability when you don't see the 'final' keyword. Whereas, when I read (hypothetically) your code, I'm unable to identify mutability patterns without first tediously analyzing it. That's a maintenance issue.
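    A small sketch of both points, with invented names (before Java 8, a local had to be declared final to be captured by an anonymous inner class at all, and constructor-assigned final fields give a simple immutable type):

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    public class FinalDemo {

        // Immutable value object: all state is assigned once in the constructor.
        public static final class Point {
            private final int x;
            private final int y;

            public Point(int x, int y) { this.x = x; this.y = y; }
            public int getX() { return x; }
            public int getY() { return y; }
        }

        void wire(JButton button) {
            final String message = "clicked";   // captured by the inner class below
            button.addActionListener(new ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    System.out.println(message);
                }
            });
        }
    }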
  18. Hi Brian,
    Final boosts a data declaration's quality and readability.
    Actually, I'm in agreement with you, although I don't take it to the same extent that you do. In the case of fields, the final keyword (polymorphic like many keywords in Java) is used to actually mean "const" and it is useful.

    That's why I said "classes and methods" in which case final means "can't do something in a sub-class with me." As in, "can't AOP me" and "can't override me" etc. so instead of being "useful" it is "anti-useful" ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  19. Is final, Is good[ Go to top ]

    When you're working on software that gets re-used (quite often in binary form -- not your very own "refactorable source" that you can fork on whim) for years and years by many different projects and developers, limiting their future options reflects the assumption that you know everything now that will need to be extended ever in the future.
    When you're working on such software, you're constantly making decisions about what will be useful and what won't. For something as important as extension, it is better to explicitly declare that you meant for it to be extended.
    I can count the number of valid (IMHO) uses of "final" on one hand. Take java.lang.String as an example, because it is a constant.
    Meaning immutable? But if that's a good reason, then it would apply to many classes.
    it does take effort to build classes that can truly be safely extended without source being available (or even if it is available, assuming it cannot be modified for whatever reason.) However, following a few simple rules can do wonders for the reusability and extensibility of Java classes.
    Perhaps, but I don't see why it's always worthwhile. By making it extensible, you're adding an extra usage pattern which might not have any great use over other intended patterns. In any event, if the developer is talented enough, and the effort is justified, then it's hardly arduous to add a "final" keyword. In most cases, though, I think classes genuinely worth extending would be in the minority.
    Even if you don't follow those rules, though, making the class be final may prevent someone from working around some issue in the future, which is bad karma.
    Clearly there's no great need to do this if you do control the superclass - you're adding a constraint which can easily be removed if the justification arises. And when the justification does arise, having to remove the keyword might prompt you to think about any refactoring that should happen to make it truly extensible.

    The argument for "final" also applies to published code where you can't control how it's used. Binary compatibility becomes a problem if you use "final", but equally troublesome is your ability to alter the superclass when you don't know if/who/how/why users have extended it. That's a big price to pay for a class that never seemed to have a case for extension.

    As a user of any typical library, I'm extremely cautious about extending classes because it can make my code fragile - for any given class, I'd like to know what the developer's plans are in the future.

    Put simply, Java's default of "extensible" is not compatible with the reality that extensibility is a privilege, not a right. You have to explicitly design for it, so you should have to explicitly declare it.
  20. final is a dead end[ Go to top ]

    Michael: When you're working on such software, you're constantly making decisions about what will be useful and what won't. For something as important as extension, it is better to explicitly declare that you meant for it to be extended.

    Maybe I work in a completely different universe, but "something as important as extension" is so commonplace as to be the rule, not the exception.

    Michael: Meaning immutable? But if that's a good reason, then it would apply to many classes.

    Yes, that is what I meant. Again, I think I am living in a different universe, because I don't see that many immutable classes, and the ones that should be (e.g. Date) aren't! ;-)

    Michael: Perhaps, but I don't see why it's always worthwhile. By making it extensible, you're adding an extra usage pattern which might not have any great use over other intended patterns.

    Actually, if one follows general Javabean-style guidelines for development, about the only thing "extra" that one has to do to support inheritance well is to use factory methods for instantiating inner classes (since Java doesn't have a virtual "new" capability yet.)

    Of course, most coders don't follow Javabean-style development patterns, since willy-nilly access to fields saves a couple of key-strokes (and five years ago it even made a performance difference.)
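    A sketch of that factory-method idea, with invented names; the point is that the superclass never hard-codes "new", so a subclass can substitute its own type without touching the superclass source:

    public class Table {

        protected Row createRow() {      // the overridable "virtual new"
            return new Row();
        }

        public Row addRow() {
            Row row = createRow();       // the workflow always goes through the factory
            // ... register the row ...
            return row;
        }

        public static class Row { /* ... */ }
    }

    class SortedTable extends Table {

        protected Row createRow() {
            return new SortedRow();      // the subclass plugs in its own row type
        }

        static class SortedRow extends Row { /* ... */ }
    }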

    .. you're adding a constraint [the final keyword] which can easily be removed if the justification arises. And when the justification does arise, having to remove the keyword might prompt you to think about any refactoring that should happen to make it truly extensible.

    You keep saying "you" as if products are built and maintained by one person, and again you have this concept that there's one copy of the source used in one project.

    When you have to modify pre-existing source to accomplish something, that is brittle. When it is source that is shared by half a dozen products, some of which you may not even know about, having to change source is a huge risk. Further, you may not have source, or you may not have the rights to change it.

    This "XP" mentality has gone way too far, that you have this little code base that springs from nothing and has no interdependencies with the rest of reality. Since people aren't grokking this, I am starting to feel like "the man crying out in the wilderness" ... so maybe it's just me ;-)

    OTOH, this week I had to extend a class that I do not "own" the source for (i.e. I can not change the source,) and that was not designed to be extended. Thankfully, it was not final, otherwise I would be USC-P.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  21. final should be default[ Go to top ]

    Again, I think I am living in a different universe, because I don't see that many immutable classes, and the ones that should be (e.g. Date) aren't! ;-)
    Cameron, we're apparently in the same universe because I don't see many immutables either. However, my ideal universe would contain a much greater proportion of them (for reasons I won't mention here as it's a side issue).
    Michael: Perhaps, but I don't see why it's always worthwhile. By making it extensible, you're adding an extra usage pattern which might not have any great use over other intended patterns.

    Cameron: Actually, if one follows general Javabean-style guidelines for development, about the only thing "extra" that one has to do to support inheritance well is to use factory methods for instantiating inner classes (since Java doesn't have a virtual "new" capability yet.)
    I need to clarify what I said: A class should be designed with certain patterns in mind regarding how it will be used. Sometimes (e.g. AbstractList), the pattern is that it should be extended. Other times (e.g. ArrayList), the pattern is that it should be referenced. It's easy enough to make it extensible, but why add the possibility if there is no good reason? One common answer is because you can't envision all usage scenarios, so it's better to throw it out there and see if it's useful. Well, maybe that sort of experiment is worthwhile in some cases. But any other experiment, such as adding a method for more flexibility, would require some thinking and a declaration. So I'd say the same here - if you're thinking it's maybe useful to extend it, at least you should have to declare "final" to ensure you've thought about it. (That's not the only argument in favour of "final" - the more important one concerns about how easy it really is to make the class extensible.)
    You keep saying "you" as if products are built and maintained by one person, and again you have this concept that there's one copy of the source used in one project.When you have to modify pre-existing source to accomplish something, that is brittle. When it is source that is shared by half a dozen products, some of which you may not even know about, having to change source is a huge risk. Further, you may not have source, or you may not have the rights to change it.
    True. Code ownership is far from binary, as my distinction implied it to be. My argument for "final" nevertheless applies across the spectrum.
    OTOH, this week I had to extend a class that I do not "own" the source for (i.e. I can not change the source,) and that was not designed to be extended. Thankfully, it was not final, otherwise I would be USC-P.
    Why did you *have to* extend it? You only have to extend it if you can't change the calling code, and the calling code expects the superclass. If that's the case, it's an interface-implementation issue again; the original coder could have provided an interface to prevent this from happening and calling code should be talking to the interface, not the superclass implementation.

    More likely, people *want to* extend because the calling code uses something similar but more specialised. If it's been designed for extension, then that's the right thing to do. If not, it would be better to create a service-oriented interface to support your calling code (even if it's only imaginary code because you're creating a component for reuse), and implement the interface using the original (not owned) class.

    So it comes down to knowing, for a given class, "was this class designed for extension"? And my point is that Java makes it hard to answer that question. The language default (extensible) does not match the typical situation (final). If developers used "final" properly, they'd use it on almost every class, which implies a problem with the language. (Albeit a minor problem in the context of our collective Java universe :-).
  22. final should be default[ Go to top ]

    Michael: Why did you *have to* extend it? You only have to extend it if you can't change the calling code, and the calling code expects the superclass. If that's the case, it's an interface-implementation issue again; the original coder could have provided an interface to prevent this from happening and calling code should be talking to the interface, not the superclass implementation.

    Yup, that's exactly what it is. The software was written by another company (that you have heard of, no doubt) and I have no way to use or modify their sources.

    Michael: More likely, people *want to* extend because the calling code uses something similar but more specialised.

    Normally, it's the superclass that is less specialized. (Are you British? I thought that Microsoft Office's spell-checker would have fixed all those "s" instead of "z" spelling errors. ;-)

    Michael: If it's been designed for extension, then that's the right thing to do.

    Nope. It's a bitch, actually. However, at least it's not marked final, or their customers would be screwed.

    Michael: So it comes down to knowing, for a given class, "was this class designed for extension"?

    No, it comes down to knowing that a given class should explicitly NOT be extended a la String ;-)

    One of the biggest mistakes in C++ was making "virtual" an optional keyword. That -- in and of itself -- is enough of a reason to never ever look at C++ again. Introducing "final" as a coding default (implicit or explicit) would be making the same mistake all over again. (Random thought: Hey, isn't that what .NET did with "C#"?!?)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  23. Using "final" is yet another way that one programmer arrogantly pretends to know the future. The "final" keyword, like any other, is a tool.
    An underused tool by far.

    I stand by my original post. I'd rather use the code base of the "arrogant" programmer who spent time writing the few classes that need to be "extended" than the "sloppy" programmer who left everything wide open.

    Can you give me one example of where marking a class as final limits a future unknown use of the code base? I can distribute my code far and wide and if anyone wants to use my final classes they can encapsulate them in their own class.

    Downside? Maybe a little more work writing a wrapper than simply choosing extends but that is the whining of the "sloppy" programmer.

    Upside? Pick up OO 101 to learn the benefits of encapsulation over extension when it comes to dependencies and decoupling.
  24. George: Upside? Pick up OO 101 to learn the benefits of encapsulation over extension when it comes to dependencies and decoupling.

    Whatever perfect coding world you got to live in, get me a ticket there as fast as you can ;-)

    When encapsulation stops throwing a ClassCastException in someone else's code, then I'll relax my stance.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  25. Refactor Classes to Remove Anonymous Classes
    That's definitely a mistake. Robert misses a very important maintainability aspect: information hiding. Usually an anonymous class is declared within a method. Presumably Robert wants to hoist the inner class up to a wider lexical scope. That obscures that the class is only used locally. Such obfuscation of scope impedes maintenance. It's generally sinful to pollute a wider namespace unnecessarily.
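    A rough sketch of that local scoping, with invented names; the comparator exists only inside sortByLength(), so nothing outside the method ever sees it:

    import java.util.Collections;
    import java.util.Comparator;
    import java.util.List;

    public class LocalScopeDemo {
        static void sortByLength(List<String> words) {
            Collections.sort(words, new Comparator<String>() {
                public int compare(String a, String b) {
                    return a.length() - b.length();
                }
            });
        }
    }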
  26. Bastard classes[ Go to top ]

    I checked the author's book, "Hard Core Java", and he elaborates in Chapter 6 on why he doesn't like Anonymous Inner Classes:

    1) Difficult to read.
    2) Can be replaced by monoliths that implement listener interfaces.
    3) Harder to debug.
    4) Violates OO principles, since classes are meant for reusability and AICs can't be reused.

    I agree with points 1 and 4. The problem with monoliths or just implementing the listener interfaces in the outer class is that you have to go through the drudgery of implementing ALL the methods in the interfaces since you can't multiple-inherit adapter classes.

    Personally, I think a more fitting name for anonymous inner classes is "bastard classes", because they came about as an ugly preprocessor hack (not at the bytecode level) by Sun just to prove Microsoft wrong on the delegates issue. Sun should've taken the delegate concept from Microsoft back then and implemented it instead of that bastard inner-class contraption, regardless of the political issues involved at the time (i.e. MS was trying to sabotage Java's cross-platform goal). The delegate model in C# is much cleaner and more readable than Java's bastard classes.
  27. Bastard classes[ Go to top ]

    I checked the author's book, "Hard Core Java", and he elaborates in Chapter 6 on why he doesn't like Anonymous Inner Classes: 1) Difficult to read. 2) Can be replaced by monoliths that implement listener interfaces. 3) Harder to debug. 4) Violates OO principles since classes are meant for reusability and AICs can't be reused. I agree with points 1 and 4.
    I disagree with point 4. One of the OO principles is encapsulation, and using anonymous inner classes improves encapsulation. Remove them, and now you have two separate classes that are invisibly coupled.

    --
    Cedric
  28. Bastard classes[ Go to top ]

    One of the OO principles is encapsulation, and using anonymous inner classes improves encapsulation. Remove them, and now you have two separate classes that are invisibly coupled.
    Ultimately, you want to encapsulate the implementation of the event handler. The best way to do this is to define it as a method in the outer class, have that class implement the listener interface, and pass "this" to the addWhateverListener() method.

    Alas, doing this will force you to define dummy methods for all events you don't want to handle. The solution is to use adapter classes and override only what you're interested in handling. But since Java doesn't allow multiple inheritance, you're restricted to one adapter class per outer class.

    The quick and dirty solution to solve this dilemma was to define a class "on the fly", have that class extend an adapter class, and override your event handler in it.

    Consequently, the anonymous class is essentially an unnecessary artifact, a side-effect of the solution. It's additional clutter on your disk ("OuterClass$N.class") and in the bytecode.

    So having to encapsulate this artifact class is a solution to a problem we created in the first place by using bastard classes to solve the "function pointer" dilemma. And I still believe that delegates are a cleaner solution.
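    For reference, a minimal sketch of that "on the fly" adapter subclass, using Swing/AWT names and an invented panel class; only the one interesting event is overridden, and the outer class stays free to extend something else:

    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import javax.swing.JPanel;

    public class CanvasPanel extends JPanel {
        public CanvasPanel() {
            addMouseListener(new MouseAdapter() {
                public void mouseClicked(MouseEvent e) {
                    repaint();   // handle just this one event
                }
            });
        }
    }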
  29. Bastard Classes: Missing the point[ Go to top ]

    Ultimately, you want to encapsulate the implementation of the event handler. The best way to do this is to define it as a method in the outer class, have that class implement the listener interface, and pass "this" to the addWhateverListener() method.
    You're missing the point here. About 99% of the time, the fact that your class is "listening to something" shouldn't be public knowledge - i.e. part of your class's public interface. Anonymous classes are a powerful way of encapsulating that "listening" logic privately within a class.

    As far as comparisons with delegates go, I prefer the capability of an anonymous inner class to maintain encapsulated state. IIRC, the only way you can do this with a delegate is to push that state back into its parent object. Yuk.
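    A small sketch of what that encapsulated state can look like, with invented names; the click counter lives inside the anonymous listener itself, not in the enclosing object:

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import javax.swing.JButton;

    public class ClickCounterDemo {
        void wire(JButton button) {
            button.addActionListener(new ActionListener() {
                private int clicks;   // state owned by the listener alone

                public void actionPerformed(ActionEvent e) {
                    clicks++;
                    System.out.println("clicks so far: " + clicks);
                }
            });
        }
    }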

    I won't argue that the syntax for anonymous inner classes is somewhat of a bear. I know Gilad Bracha has been thinking about it.

    God bless,
    -Toby Reyelts
  30. Bastard Classes: Missing the point[ Go to top ]

    If my class is representing a UI component, then why would I want to conceal that it can listen to events related to that component? Sometimes this is quite handy because I can simulate the event manually by calling the handler directly on an instance of that class. If I encapsulated the listening in an anonymous inner class, there's no way I can access that event handler, unless I store a reference to that inner class object and create a public method that delegates the call to the inner class, which is more trouble than I need.

    Anyway, this is a moot point, because we know it's not practical to implement the listener in the outer class.

    My point here is that bastard classes are just an ugly solution to a simple problem and a syntactic hack that seems at odds with Java's elegance.

    Anyway, the subject of bastard classes vs. delegates is an unending debate, even JG disagrees with me, so I guess it's probably a personal preference after all.
  31. Bastard classes[ Go to top ]

    Ultimately, you want to encapsulate the implementation of the event handler. The best way to do this is to define it as a method in the outer class, have that class implement the listener interface, and pass "this" to the addWhateverListener() method.
    That's neither encapsulation nor object oriented. Encapsulation has information hiding, whereas you do the opposite by further publicizing an internal implementation detail. Your coding style pollutes the global namespace, and that makes maintenance harder.
  32. Bastard classes[ Go to top ]

    It depends on what your class is representing. Suppose I'm handling event X in SomeWidget, and a set of rules were met that AnotherWidget should behave as if it received event Y, then I can just "send that message" or simulate that event manually by calling objAnotherWidget.handleEventY(). This allows you to simulate events to any listener object, and it makes testing much easier.

    In other words, the fact that this object can listen to events is part of its contract. I don't see what violates OO principles in that.
  33. final point[ Go to top ]

    I've heard the argument about using final all over the place before, but to me this makes for ugly/confusing code. If it's really a good idea, then it suggests a problem with Java. After looking through my code, I discovered MOST of the things in my code could be made final. If that's true, why should Java default to things not being final? Perhaps Java should make all things final by default and have a new "var" keyword if you really want to make something variable.

    Using the final keyword so frequently suggests a problem with the language or that it shouldn't be used so frequently IMO.
  34. final point[ Go to top ]

     After looking through my code, I discovered MOST of the things in my code could be made final.
    OK, I'll bite: what would you actually gain by making "MOST of the things in my code" final? What would be the concrete benefit of doing so? What _real_ benefits? No hypotheticals, no "some bad programmer might screw up using inheritance" - give a concrete way in which final improves "MOST of the things" in your code.

        -Mike
  35. When Final is useful[ Go to top ]

    Final is often useful when you're dealing with an inheritance hierarchy, and specifically a hierarchy where superclasses capture a good deal of workflow via template methods. It makes good sense to define the workflow via final methods to guarantee they cannot be overridden, yet still allow subclasses to add in custom logic at specific points within that workflow (hooks) that are clearly documented as appropriate for overriding.
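    A minimal sketch of that arrangement, with invented names; the workflow method is final, while the hooks it calls are documented as the customization points:

    public abstract class ReportGenerator {

        // The fixed process: subclasses cannot reorder or skip steps.
        public final void generate() {
            openOutput();
            writeBody();      // the customization point
            closeOutput();
        }

        // Hooks that subclasses are expected (and documented) to override.
        protected abstract void writeBody();

        protected void openOutput()  { System.out.println("-- begin report --"); }
        protected void closeOutput() { System.out.println("-- end report --"); }
    }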
  36. When Final is useful[ Go to top ]

    The template pattern is used for design, not to protect your code. You are not protecting the user from "messing things up". You are coding a process in one place - you define the steps of a process that subclasses are responsible for implementing. The *process* is final. This is different from what the author was talking about.

    For me, *not* using final has never bitten me, as far as I can remember. Changes made to mutable objects such as collections and Dates, made outside the owning class, *have* bitten me. For me, final doesn't buy me a lot in most cases - I guess I agree with Mike here. I especially think the author's example was way overkill. Making the Iterator in a for loop final. Please! I hope that was exaggeration for effect, but I think he was serious.

    Ryan
  37. When Final is useful[ Go to top ]

    The *process* is final.
    I agree with you, you actually made my point. The process *is* final, so make the template methods that define the process *final* so subclasses can't mess with it.
  38. final point[ Go to top ]

    Consider this:

    class X {

        private Object f;

        public X(Object f) {
            f = f; // which f is being modified?
        }
    }

    class Y {

        Object f;

        public Y(final Object f) {
            f = f; // will not compile
        }
    }
  39. final point[ Go to top ]

    No need to use final--Eclipse will alert me to that one. :)

    Just by looking at that code, it's obvious you're asking for trouble. Try some different names or at least use "this".
  40. final point[ Go to top ]

    "Just by looking at that code, it's obvious you're asking for trouble. Try some different names or at least use "this". "


    The point was that by making the param final,
    "f = f" will not compile, while "this.f = f" will, and will give the (most likely) intended results.

    Using final can and has caught potential errors.
  41. Lovely Discussion[ Go to top ]

    I have read through this thread and I am enjoying seeing all of the discussion going on about fundamental issues. As senior developers we often get into a rut and take things for granted.

    As for final, I'm of two minds on using it on classes. Both of the groups are right to an extent. You should consider carefully whether you think that a class should be extensible. If not, then it should be final. Otherwise it shouldn't be.

    When I talk about using final I am referring more to the usage of it within method code, such as I show in my examples. Sure, you can say that the user just shouldn't name things the same way in setters as in attributes, but what if one day you are sleepy and leave off the prefix to your variable by accident, or accidentally forget to type "this"? Then final will help you out. It's there to push logic errors into compiler errors. Like a condom, you may not need it, but it's better to be safe than sorry. Next time you spend 5 hours to find a logic bug only to see that it is something trivial, look and see if final could have helped.

    Personally I dislike prefixing variables. That is just personal taste. Since I use final like a maniac, it's no problem.

    But please do continue on. I find the final class versus non-final class discussion to be fascinating.

    As for weak references, to not use them is to ask for trouble. I have not been at a company yet that didn't have reference problems. As for what I mean about circular references, you should check out that part of the book as it makes things much more clear. (And for the pedantic one, I do say that it is a directed graph. =) )

    I enjoy hearing comments on my work. I seek them out and read them. Keep it up. =) =)

    -- Robert
  42. final point[ Go to top ]

    Mike,

    I think you misunderstood my point. I'm against using final all over the place. I was using the fact that most things in my code could use it, as evidence that it doesn't really make sense.

    I offered the option of having a "var" keyword to use if it really was that important, and just make "final" the default instead. But I don't believe it really is that important.
  43. Removing anonymous classes[ Go to top ]

    I think there is a case for removing anonymous classes. The thing that isn't stated explicitly in the list above is what they should be replaced with. IMHO promoting them to private inner classes, especially to private static inner classes where possible, usually leads to more readable code. It also gives you code that works better with modern IDEs (you'll be able to see a meaningful name for the class in the structure tree, instead of "1" for example).

    Reading the article, that's actually what the author suggests. Although he then goes on to shoot himself in the foot by saying "or, better yet, just implement that listener interface yourself".
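    A rough sketch of that promotion, with invented names; instead of an anonymous FileFilter, a named private static nested class shows up in the IDE's structure view as "XmlFileFilter" rather than "MyScanner$1":

    import java.io.File;
    import java.io.FileFilter;

    public class MyScanner {

        public File[] listXmlFiles(File dir) {
            return dir.listFiles(new XmlFileFilter());
        }

        private static class XmlFileFilter implements FileFilter {
            public boolean accept(File file) {
                return file.getName().endsWith(".xml");
            }
        }
    }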
  44. Removing anonymous classes[ Go to top ]

    IMHO promoting them to private inner classes, especially to private static inner classes where possible, usually leads to more readable code.
    I'll agree that hoisting a class out of a method is more verbose. But how is it more readable?
  45. A few comments here are pretty alarming. At least one poster appears to think that subclassing is something that should be prevented by default. Another admits that using "final" might help but thinks it's distracting to look at. And somebody else seems to think that it's a good idea to first prevent inheritance, and then use encapsulation to get around the lack of inheritance. Oy.

    Using final on method arguments, local variables, and appropriate properties makes immediate sense. It's as simple as getting into the habit of writing "if (null == someObj)" instead of "if (someObj == null)" (etc.) so that the compiler will scream and yell when you accidentally use = instead of ==. It may be tedious to add to an existing project retroactively, but that doesn't mean you should reject it for future code. It can only catch fairly stupid mistakes, but those are the ones that look OK until you stare at a method for the 20th time and then slap yourself on the forehead. Why not adopt this?

    On the other hand, final methods and classes are generally a bad idea. You should *definitely* have to defend any decision to break reuse, such as by declaring a class or method final. I would jump all over this (nearly all classes in an app's source declared as final) if I saw it during a code review. If your class is so fragile that it can't be subclassed, fix it. If it has complex setup requirements, document them, then sanity-check them in your method implementation and throw an IllegalStateException with a description that explains what ought to have been done in the exception message. Just declaring the whole thing final is a cop-out. You're giving the finger to anyone who wants to reuse your class. You might as well take it a step further, and use this constructor:

    public DoNotSubclassMe() {
        if (!this.getClass().equals(DoNotSubclassMe.class)) {
            System.out.println("Code reuse? Never!");
            System.exit(0);
        }
    }

    Seriously, if you declare your class (or constructor) final, anyone who wants to reuse it will have to write a wrapper class with a similar API, and still won't be able to use polymorphism either since their objects' class won't be assignable to yours. This seems like an outrageous antipattern. Maybe in the land of C++, where there's a huge difference between an application and a library, it makes sense to have to jump through some hoops to define externally visible interfaces, but in Java (and many other languages) every class is a reusable library by default, and defeating that devalues your code tremendously.
  46. Jamie has it spot on.[ Go to top ]

    I must say that I agree with your post. If you download the sample chapter from my book (which I think is still a version with a few errata in it, so be careful) you will see that I am fervently against blocking the future. The times to make methods or classes final are rare and very well defined.


    Jamie: I would be interested to see what you think of the sample chapter.