Michael Feathers: It's time to deprecate final

  1. Michael Feathers, in a blog entry entitled "It's time to deprecate final," says that "The problem with final (and a couple other language mechanisms like it) is that it is too coarse a tool for what it does. It prevents everyone from subclassing or overriding. What we really need is selective access, or maybe just convention."
    Here's the problem: When you use final pervasively, you make unit testing nearly impossible. Believe me, I know. I encounter enough teams who are in that situation. Have a class A which uses a class B that is final? Well, you'd better like B when you are testing A because it's coming along for the ride. Extract an interface for B? Sure, if you own B. If you don't, you have to wrap it to supply a mock. And... here's the kicker: if you are going to wrap B, well, what about security? All final does for a class is guarantee that the class itself won't be the point of access, but what about your wrapper? If you use the interface of the wrapper everywhere, again, because you want to test, the developers of B haven't made your software more secure, they've merely pushed the problem onto your plate: they've forced you to choose between testability and security. It's rather interesting to consider that perhaps we truly can have security, but only if we can't really be sure our software works.
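    To make the wrap-to-mock point concrete, here is a minimal sketch of the workaround described above (all names are hypothetical, and each type would live in its own source file):

    // The third-party class we can't subclass:
    public final class ThirdPartyMailer {
        public void send(String message) { /* really sends mail */ }
    }

    // The seam: an interface we control...
    public interface MailSender {
        void send(String message);
    }

    // ...a thin wrapper for production use...
    public class WrappedMailer implements MailSender {
        private final ThirdPartyMailer mailer = new ThirdPartyMailer();
        public void send(String message) { mailer.send(message); }
    }

    // ...and a trivial fake for tests.
    public class FakeMailer implements MailSender {
        public final java.util.List<String> sent = new java.util.ArrayList<String>();
        public void send(String message) { sent.add(message); }
    }

    Class A then depends on MailSender rather than on the final class directly, which is exactly the extra layer the post argues final forces on you.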
    What do you think? Has final outlived its usefulness? Other resources:

    Threaded Messages (86)

  2. Rubbish[ Go to top ]

    Utter garbage. Final stops people creating deeply nested class hierarchies, which are a bad smell. The key to reuse is NOT inheritance - inheritance has a place, but too many people use it badly. Final provides a useful way to prevent people writing incomprehensible and hard-to-maintain code.
  3. Re: Rubbish[ Go to top ]

    I agree with Simon. I'm sure final and sealed may feel awkward to anyone coming from a Smalltalk / Ruby / PHP background; but I think what that comes down to is the "be able to write the most robust code possible" vs. "trust that anyone using your code knows what he's doing (and if not, their loss)" debate. There are strong arguments to be made for either course, but I think it's fairly obvious Java was built towards the former. ERH illustrates this robustness point well in his XOM design principles article, pointing out how the final keyword allows him to write an XML library that will not let you output an invalid XML document, ever. I digress. Michael Feathers' primary argument for ditching the final keyword is that it somehow complicates unit testing. Sadly, he fails to give any example of a piece of Java code that's untestable because of the final modifier and to be honest I can't think of any (valid) piece where this is the case. Until I've seen a convincing example to the contrary, I'm inclined to say that a class untestable due to the final keyword is the result of a poorly designed class, not of a poorly designed keyword. Rather than changing the programming language, it's the class that should be changed. Like Simon said, subclassing something to test it is a poor use of inheritance.
  4. Re: Rubbish[ Go to top ]

    ...a class untestable due to the final keyword is the result of a poorly designed class, not of a poorly designed keyword.
    Agreed fully! I'm sure Michael would agree as well. However, when you inherit crappy legacy code and need to bring it under test, you don't have the luxury of just saying that it was poorly designed and walking away from it.
    ...subclassing something to test it is a poor use of inheritance.
    Michael is the author of Working Effectively with Legacy Code (WELC), which is the "bible" for introducing tests to code that currently has none. The book provides many examples of what he is talking about. One of the concepts Michael introduced was that of "seams" in the code. These are places where you can replace or stub functionality in order to make the code easier to test. One such seam technique is to override a method in the constructor of the class under test, e.g.

    FooRepository fooRepository = new FooRepository() {
        public Collection retrieveAll() {
            // this method is now stubbed
            return new ArrayList();
        }
    };

    In this example, we want to test a method in FooRepository, but don't want to have to hit the database. We can do this by overriding just the single method above. However, if retrieveAll is declared as final, you can't use this approach. You also wouldn't be able to subclass FooRepository and override the method. Put simply, you'd be stuck calling the database. Of course, had the code been written using TDD, it would never have been an issue. Dave Rooney Mayford Technologies
  5. Re: Rubbish[ Go to top ]

    One such seam technique is to override a method in the constructor of the class under test, e.g.


    FooRepository fooRepository = new FooRepository() {
        public Collection retrieveAll() {
            // this method is now stubbed
            return new ArrayList();
        }
    };

    In this example, we want to test a method in FooRepository, but don't want to have to hit the database. We can do this by overriding just the single method above. However, if retrieveAll is declared as final, you can't use this approach. You also wouldn't be able to subclass FooRepository and override the method. Put simply, you'd be stuck calling the database.

    Of course, had the code been written using TDD, it would never have been an issue.

    Dave Rooney
    Mayford Technologies
    I'm learning Spring and I don't know what TDD stands for; maybe Dave explained it implicitly. This is my opinion: if we have a FooRepository_Interface, and FooRepository implements it with a final retrieveAll method, then to test the retrieveAll method without hitting the DB we only need to create a FooRepository_Mock class implementing the FooRepository_Interface. So I think 'final' is still good, and deprecating it is unnecessary.
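    For what it's worth, a minimal sketch of that approach, using the poster's hypothetical names (each type in its own source file), could look like this:

    public interface FooRepository_Interface {
        java.util.Collection retrieveAll();
    }

    // Production class; retrieveAll can stay final.
    public class FooRepository implements FooRepository_Interface {
        public final java.util.Collection retrieveAll() {
            // ...would hit the database here...
            return new java.util.ArrayList();
        }
    }

    // Test-only mock; no database and no subclassing needed.
    public class FooRepository_Mock implements FooRepository_Interface {
        public java.util.Collection retrieveAll() {
            return new java.util.ArrayList(); // canned data
        }
    }

    Note this only works if the code under test depends on FooRepository_Interface rather than on FooRepository itself, which is the point Michael makes about owning the class.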
  6. TDD[ Go to top ]

    TDD == Test-Driven Development. Have a look at the Wikipedia entry. It gives a bit of background, although I don't necessarily agree with the Limitations section. Another good reference is TestDriven.com. Dave Rooney Mayford Technologies
  7. Re: Rubbish Rubbish[ Go to top ]

    Utter garbage. Final stops people creating deeply nested class hierarchies, which are a bad smell. The key to reuse is NOT inheritance - inheritance has a place, but too many people use it badly. Final provides a useful way to prevent people writing incomprehensible and hard-to-maintain code.
    I respect your opinion but I cringe to the point of pain whenever someone says that some Java feature will save developers from themselves. Java is type-safe and that keeps developers from writing bad code. Java has final and that keeps developers from writing bad code. Java has garbage collection and that keeps developers from writing bad code. It just isn't true. Generics adds type safety to collections and that will keep developers from writing bad code. Java won't let you override operators and that keeps developers from writing bad code. I've seen way too much bad Java to believe that any of these Java sneeze-guards will improve developer code quality. Never underestimate the ingenuity of bad developers. The question when deciding features for any language is: Do you dumb it down to try to meet the demands of the most inexperienced developers, or do you open it up for your power users? Both C++ and VB were equally successful, but somehow the more powerful language is still expanding into new domains while the other is waning. Since no language could or should do it all, you have to pick your target audience. The keyword "final" is essential for locking down primitive variables and immutable objects. It is the only way I know of making a constant in Java. It is less helpful in locking down mutable objects. It is reasonably optional at the class/method level, and I rarely use it there in practice. I prefer to javadoc my methods/classes and give proper warnings where needed. I don't assume the clients of my code are idiots. I assume they are experienced developers who may have some future reason to subclass my code that I can't know about right now. On the other hand, I've not yet been hamstrung by a class or method marked as final. Maybe the day will come, but I haven't experienced the poster's pain enough to agree that the keyword needs to be removed or enhanced. ________________________ George Coller DevilElephant
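    To make the constants point concrete, a minimal sketch (not from the original post): final locks the variable, not the object it refers to.

    import java.util.ArrayList;
    import java.util.List;

    public class Limits {
        // A true constant: a final primitive.
        public static final int MAX_RETRIES = 3;

        // Only the reference is locked; the list itself stays mutable.
        public static final List<String> NAMES = new ArrayList<String>();

        void demo() {
            // MAX_RETRIES = 4;                 // would not compile: final
            // NAMES = new ArrayList<String>(); // would not compile: final
            NAMES.add("still mutable");         // compiles fine
        }
    }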
  8. Re: Rubbish Rubbish[ Go to top ]

    I respect your opinion but I cringe to the point of pain whenever someone says that some Java feature will save developers from themselves. Java is type-safe and that keeps developers from writing bad code. Java has final and that keeps developers from writing bad code. Java has garbage collection and that keeps developers from writing bad code. It just isn't true. Generics adds type safety to collections and that will keep developers from writing bad code. Java won't let you override operators and that keeps developers from writing bad code
    I agree with your premise entirely, especially the part about never underestimating the power of bad developers to make a mess of things. The worst is when these developers think they are being really clever. However, I do think that a language can steer developers in one way or another. Similar to the theory that language can constrain or enable certain concepts, a programming language that enables a certain idiom very naturally and makes another very unwieldy will tend to 'produce' code that favors the more natural idioms. This brings me to another issue with Java (as I see it): it takes too much structural code to create closures. It desperately needs function pointers.
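    To illustrate the boilerplate complaint (a sketch, not from the original post): the closest thing Java offers to a closure today is an anonymous inner class.

    public class ClosureDemo {
        public static void main(String[] args) {
            final String name = "world"; // captured locals must be final
            // Five lines of ceremony for one line of behaviour:
            Runnable greeter = new Runnable() {
                public void run() {
                    System.out.println("Hello, " + name);
                }
            };
            greeter.run();
        }
    }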
  9. Re: Rubbish Rubbish[ Go to top ]

    I've seen way too much bad Java to believe that any of these Java sneeze-guards will improve developer code quality. Never underestimate the ingenuity of bad developers.
    LOL
    The question when deciding features for any language is: Do you dumb it down to try to meet the demands of the most inexperienced developers, or do you open it up for your power users?
    But it is not *only* about protecting users of your API. Just as important is that you should not expose things you don't mean to expose/ don't understand yet. Deliberately not opening up too much is a good way to 'force' discussion on extension points, so that both users and developers can look for the best solution. Of course, there are many ifs and buts, but my experience is that, even though it will annoy API users every once in a while, it is the best way to develop an API which is good for users and well understood by its developers. Btw, defending final doesn't mean final should just be applied anywhere without bothering about anything else. I believe that when designing an API, every time you make a class or method final you should at the same time be thinking about the ways in which you limit your users, and consider providing extension points and/or breakouts - the latter providing a means to do 'hacks', but at least in a direction that is manageable.
  10. Re: Rubbish Rubbish[ Go to top ]

    But it is not *only* about protecting users of your API. Just as important is that you should not expose things you don't mean to expose/ don't understand yet. Deliberately not opening up too much is a good way to 'force' discussion on extension points, so that both users and developers can look for the best solution.
    Interesting points. If you trusted the clients of your API, you might get this same protection from coding conventions. Possibly packaging your public API classes/interfaces separately from your implementation. It would be understood that use of any behind-the-API classes would not be supported beyond the current release. George Coller DevilElephant
  11. Re: Rubbish Rubbish[ Go to top ]

    But it is not *only* about protecting users of your API. Just as important is that you should not expose things you don't mean to expose/ don't understand yet. Deliberately not opening up too much is a good way to 'force' discussion on extension points, so that both users and developers can look for the best solution.


    Interesting points. If you trusted the clients of your API, you might get this same protection from coding conventions. Possibly packaging your public API classes/interfaces separately from your implementation. It would be understood that use of any behind-the-API classes would not be supported beyond the current release.

    George Coller
    DevilElephant
    It's as much about trusting ourselves as trusting users. Again, I point at the kind of code I've seen from Eclipse plugins - and I've checked out quite a few - internal classes (with big fat warnings in their headers NOT to extend them) are used by most of the plugins I've seen, without any notion of why. You know, if textual contracts and coding standards worked that well, why would we need strong typing, private variables, etc. at all? Almost anything can be solved by adhering to rules. The interesting thing about strong typing, final, etc. is that they can be enforced. If you think trying to enforce something equals being scared, not trusting your users, and I don't know what else, then there's really not much more for me to say.
  12. Re: Rubbish Rubbish[ Go to top ]

    ...Java sneeze-guards...
    LOL! I'm gonna have to steal that one! :) Dave Rooney Mayford Technologies
  13. Final Class is Limitation[ Go to top ]

    Good response. A final class is a limitation. If you have problems with developers extending classes improperly, then look at what the real problem is. Could it be that it would be better to work on the actual problem than to limit the design choices? Denis Krukovsky http://talkinghub.com/
  14. Re: Final Class is Limitation[ Go to top ]

    Good response. A final class is a limitation.

    If you have problems with developers extending classes improperly, then look at what the real problem is. Could it be that it would be better to work on the actual problem than to limit the design choices?
    If you know what the actual problem is, yes. One of the reasons for not opening up your API from the start is that you want to find out where the extension points (or problems) are. It's another practical thing: there is no developer in this world who writes the perfect API from scratch. So defensive programming using final etc. is really about setting out a path for change and urging your users to participate in looking for solutions that fit everyone.
  15. Re: Rubbish[ Go to top ]

    I agree with Simon.

    I'm sure final and sealed may feel awkward to anyone coming from a Smalltalk / Ruby / PHP background;
    Actually, my first OO language was C++.
    Michael Feathers' primary argument for ditching the final keyword is that it somehow complicates unit testing. Sadly, he fails to give any example of a piece of Java code that's untestable because of the final modifier and to be honest I can't think of any (valid) piece where this is the case.
    Here's the rub. I'm not talking about testing an API but rather testing code that uses an API. Here's something to try: Write a list server using the JavaMail API but without actually exercising the mail sending and receiving code. That is, write unit tests for your code. If you try this you'll find that final and static pretty much oblige you to send and receive mail when you write classes that use the API. Now, if you want to test those classes independently, the only thing you can do is wrap or use a tool that strips final. Some people would come back and say "well, it's easy to mock up a simple mail server to test this code." Well, maybe. But what else will you mock up to test your code? A document management system? A grid computing environment? Not for unit tests.
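    For illustration, a minimal sketch of the wrapping Michael describes (the interface name is hypothetical; each type in its own source file). JavaMail's Transport.send is static, so the only seam is one you build yourself:

    import javax.mail.Message;
    import javax.mail.MessagingException;
    import javax.mail.Transport;

    // An interface the list-server code depends on instead of JavaMail directly.
    public interface MailGateway {
        void send(Message message) throws MessagingException;
    }

    // Production implementation delegates to the real, static call.
    public class JavaMailGateway implements MailGateway {
        public void send(Message message) throws MessagingException {
            Transport.send(message);
        }
    }

    // Tests supply a MailGateway that just records messages instead of sending them.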
    Until I've seen a convincing example to the contrary, I'm inclined to say that a class untestable due to the final keyword is the result of a poorly designed class, not of a poorly designed keyword.

    Rather than changing the programming language, it's the class that should be changed. Like Simon said, subclassing something to test it is a poor use of inheritance.
    There are a lot of poorly designed classes out there. Saying they are poorly designed doesn't make them better. We need to work in a way in which recovery is easier, because there are a lot of people out there who are stuck.
  16. Re: Rubbish[ Go to top ]

    Here's something to try: Write a list server using the JavaMail API but without actually exercising the mail sending and receiving code. That is, write unit tests for your code. If you try this you'll find that final and static pretty much oblige you to send and receive mail when you write classes that use the API. Now, if you want to test those classes independently, the only thing you can do is wrap or use a tool that strips final.
    Michael, from my point of view this is a particular situation, and I would solve it by hacking the code one way or another to remove the final attribute and subclass. Is that such a pain? I sometimes write libraries for internal use, and I do not want them to be touched, as I know from experience that some supposedly clever developer will find a way to break them by doing weird stuff "for optimisation" and then pretend it is a bug in the library. If I could seal the whole library I would do it for sure :o) Cheers, Christian
  17. Re: Rubbish[ Go to top ]

    Since we are playing around with class declarations, how about one for singletons? For example:
    public single class ASingleTon {
        public ASingleTon(){}
    }
    By doing this, all requests for "new" return the same instance of the class - or create one if none exists in memory. The constructor(s) become more of a way to change state. That will save this silliness:
    public class ASingleTon {
        private static ASingleTon _this;
        private ASingleTon(){}
        public static ASingleTon getInstance() {
            if (_this == null)
                _this = new ASingleTon();
            return _this;
        }
    }
    (I freehanded the above, so don't complain if I missed something.)
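    Incidentally, one thing the freehanded version above does miss is thread safety: two threads can race through the null check and create two instances. A common fix (a sketch, not part of the original post) is the initialization-on-demand holder idiom, which lets the class loader do the lazy, thread-safe work:

    public class ASingleTon {
        private ASingleTon(){}

        // The holder class isn't loaded until getInstance() is first called.
        private static class Holder {
            static final ASingleTon INSTANCE = new ASingleTon();
        }

        public static ASingleTon getInstance() {
            return Holder.INSTANCE;
        }
    }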
  18. Re: Rubbish[ Go to top ]

    Since we are playing around with class declarations, how about one for singletons?
    hasn't this "pattern" been deprecated by the entire OO community yet? it's the most overused and abused pattern out there. when i encounter this pattern, typically, it is used in a class that only contains static type methods (dao, get data from somewhere such as DAOClass.getInstance().putStuff( stuff ) , put data somewhere, etc). there's no instance level state involved with the functionality provided by the class, but yet, the static method is avoided, and a cumbersome singleton pattern has been implemented.
  19. Re: Rubbish[ Go to top ]

    Since we are playing around with class declarations, how about one for singletons?


    hasn't this "pattern" been deprecated by the entire OO community yet? it's the most overused and abused pattern out there.
    It was one pattern that was almost 'voted off the island' in a meeting run by one of the GoF a while ago. I think singleton instances are very useful, but I never make the 'singleness' of my class part of the public API. I see the singleton as a special case of a simple factory method, and I design, code, and document them as such.
  20. Re: Rubbish[ Go to top ]

    I think singleton instances are very useful, but I never make the 'singleness' of my class part of the public API.
    Exactly. Of course there's the concept of a singleton in any application, but neither the class itself should know ("I am a singleton"), nor the clients ("this is a singleton"). It's a third party ("the application") that sets up a singleton instance and makes sure the clients see it. That's the whole point of DI. Singleton-ness is always relative to some context; as soon as you want to run two "applications" in, say, one web-app, you'll notice.
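    A minimal sketch of that idea (class names are hypothetical): the class itself is ordinary, and the application decides there is exactly one.

    // Knows nothing about being a singleton; can be instantiated freely in tests.
    public class MailService {
        public void send(String to, String body) { /* ... */ }
    }

    // The "third party": application wiring that hands every client the same instance.
    public class Application {
        private static final MailService MAIL_SERVICE = new MailService();

        public static MailService mailService() {
            return MAIL_SERVICE;
        }
    }

    A DI container does the same job declaratively; the singleton-ness lives in the wiring, not in MailService.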
  21. Re: Rubbish[ Go to top ]

    I think singleton instances are very useful, but I never make the 'singleness' of my class part of the public API.


    Exactly. Of course there's the concept of a singleton in any application, but neither the class itself should know ("I am a singleton"), nor the clients ("this is a singleton"). It's a third party ("the application") that sets up a singleton instance and makes sure the clients see it. That's the whole point of DI. Singleton-ness is always relative to some context; as soon as you want to run two "applications" in, say, one web-app, you'll notice.
    So if you are writing a class, you wouldn't want to care whether its instance variables would be shared amongst clients? You just pointed out a perfect example where a common application of DI stinks.
  22. Re: Rubbish[ Go to top ]

    So if you are writing a class, you wouldn't want to care whether its instance variables would be shared amongst clients?

    Why would I? It's up to the application developer to decide. As a class developer I can suggest that it be sensible, more performant, or even vital to the correctness of a certain application that all clients share a particular instance - but again, always relative to a usage context. Give me an example where a class "must be" a singleton and I'll take it to an ASP scenario where you will want to employ multiple instances, and where you'll have to resort to classloader/process isolation in order to do so.

    And you can't enforce singletonness anyway. Just run a second VM - say, in a clustered web container - and all your guarantees will fail.

  23. Clustered singleton[ Go to top ]

    And you can't enforce singletonness anyway. Just run a second VM - say, in a clustered web container - and all your guarantees will fail.
    How about a clustered singleton .. http://forums.tangosol.com/thread.jspa?forumID=6&threadID=82 ;-) Peace, Cameron Purdy Tangosol Coherence: The Java Data Grid
  24. Re: Rubbish[ Go to top ]

    hasn't this "pattern" been deprecated by the entire OO community yet?
    No, it has not been deprecated. It's a very basic part of OO design.
    It's the most overused and abused pattern out there.
    Singletons are "overused" due to being required. They are abused usually due to the lack in the java language for properly handling the convention. I have seen plenty of implementations trying to cope with this in java with the idea of factories. But factories, if you think in the real world, are used to produce copies of an original. And I have seen plenty of factories that should be singletons themselves. From what I can tell, creating this in the language would clean up a lot of hacked code and bad practices.
  25. Singletons[ Go to top ]

    Singletons are not a basic part of OO design. They're a piece of procedural style inside OO design: http://www.cabochon.com/~stevey/blog-rants/singleton-stupid.html Denis Krukovsky http://talkinghub.com/
  26. Re: Singletons[ Go to top ]

    Singletons are not a basic part of OO design. They're a piece of procedural style inside OO design:
    Read that aloud a few times and see if you notice any redundancy. Okay, gave it some more thought. I think introducing "single" would be bad form, so instead I think "static" looks better:
    public static class ASingleton {
        public ASingleton(){}
    }
  27. Re: Singletons[ Go to top ]

    Still more thought on this. Constructors for singletons seem wasteful; instead, probably use the static initializer block for the class. All fields and methods of singletons could be considered static (even if they are not) and referenced as such. Privately scoped fields, methods, and "this" references would not be possible.
    public static class ASingleton {
        public int a;
        static {
            // This is a singleton "constructor"
            ASingleton.a = 2;
        }
        public void testMethod(){}
    }
    Referencing this singleton by other classes would be really simple:
    ASingleton.a = 5;        // Set singleton value
    ASingleton.testMethod(); // Run test method of singleton
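    The proposed syntax isn't legal Java, but as a point of comparison, here is roughly how close you can get today with an all-static class (a sketch):

    public final class ASingleton {
        public static int a;

        static {
            // Runs once when the class is loaded - the "constructor" above.
            a = 2;
        }

        private ASingleton() {} // no instances, ever

        public static void testMethod() {}
    }

    Usage is then exactly the proposed ASingleton.a = 5; and ASingleton.testMethod(); - what you lose is the ability to treat the singleton as an object (no interfaces, no polymorphism).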
  28. Re: Singletons[ Go to top ]

    The rant you linked to can be summarized as follows: I was told that singletons were bad, therefore anyone who uses singletons is stupid. Paraphrasing directly from the source: "In my [Design Patterns] [study], when we got to [Singleton], [I read]: [Singletons] are EVIL ...and that's all we had to learn about them. To this day, I have no idea [what a Singleton is good for.] But if [someone told me something], it's OK with me." The poor fool considers himself an enlightened guru because of this attitude and states: "I actually use [whether a developer uses Singletons] as a weeder question now. If they claim expertise at Design Patterns, and they can ONLY name the Singleton pattern, then they will ONLY work at some other company."
  29. Re: Recycling[ Go to top ]

    Controlling inheritance (as well as other forms of dependency) is a good idea. However, this could best be done outside a class, or even a package, by controlling what can or cannot be imported into a class. For example, it may be perfectly fine for class B to extend class A, but we may decide that class C (or, for example, any class from package P) should not have that privilege. Clearly, the author of class A cannot make that decision, since they don't yet know that classes B or C exist, nor what they should be used for. Someone responsible for the architecture of a system may want to have that kind of control, but currently has no way of enforcing it. "Final" is not the ultimate solution, but rather a first prototype of how we want to control the structural integrity of our systems.
  30. Re: Rubbish[ Go to top ]

    Let me tell you, in complete earnestness, that it doesn't appear to be working.
  31. Re: Rubbish[ Go to top ]

    Utter garbage. Final stops people creating deeply nested class hierarchies, which are a bad smell. The key to reuse is NOT inheritance - inheritance has a place, but too many people use it badly. Final provides a useful way to prevent people writing incomprehensible and hard-to-maintain code.
    Let me say, in all earnestness, that it doesn't appear to be working.
  32. I like it ...[ Go to top ]

    I've never used the final keyword in the past, but for the last year or two I've used it massively. In most cases I use it to make variables final - for example, if I want to forbid reinitializing a variable or an object attribute. It's a nice feature, and it helps in writing clearer, more readable code. It is also reasonable in some cases to forbid inheritance or the overriding of certain methods; a good example is the template method (Fowler). So, I believe in the power of the final keyword. ;) Andrej
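    The template method mentioned above is exactly the case where a final method and overridable steps work together. A sketch with hypothetical names:

    public abstract class ReportGenerator {
        // The skeleton of the algorithm is fixed: subclasses can't reorder the steps...
        public final void generate() {
            writeHeader();
            writeBody();
            writeFooter();
        }

        // ...but they must (or may) supply the steps themselves.
        protected abstract void writeBody();
        protected void writeHeader() { /* default header */ }
        protected void writeFooter() { /* default footer */ }
    }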
    To be clear: are we talking about "final" the keyword or "final classes"? The blog post seems to point to the latter but is not explicit. Final variables can be invaluable and give developers a tiny bit of sanity control over the question "did the value, or the object to which the variable points, change?". Sure, a true "const" would be even better, or true ACLs (with "const" abilities) would be best, but since it's all we've got, it can be very useful in cases such as class invariants.
    I suspect that stereo manufacturers don't weld the units shut just in case THEY have to fix them, and actually they do often fit some kind of anti-tamper device. If you "send" my code back I can still fix it, because I have the source and you can't tamper with it; okay, you might use JAD, but hey. So, if it's hard to mock a framework because it's badly designed, then it's best to vote with your feet. Actually, it may be better if "final" were the default and an "unfinal" keyword were introduced. Then "final" could be deprecated!
  35. Self Documenting Code[ Go to top ]

    Using final is considered a good programming practice, and I happen to be one who uses final regularly. Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final. Making local variables final is another good example, when you don't want code after the instance is created to be able to change the reference for the variable. Writing what you mean goes a long way towards more understandable code.
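    A small sketch of both uses (hypothetical names):

    public class Totals {
        // final documents - and the compiler enforces - that the parameter
        // reference is never reassigned inside the method.
        public int sum(final int[] values) {
            final int length = values.length; // a local that can't be rebound
            int total = 0;
            for (int i = 0; i < length; i++) {
                total += values[i];
            }
            // values = new int[0]; // would not compile: values is final
            return total;
        }
    }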
  36. re: Self Documenting Code[ Go to top ]

    Using final is considered a good programming practice...
    By some, sure. However, it sucks as an O-O practice, and there are ways around it. Read on...
    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.
    Why? If the code isn't supposed to change the value of an argument, then it shouldn't. That's Design by Contract. There should also be tests for the method to ensure that the Contract is honoured. There shouldn't have to be another layer of protection from bad programming practices. To me, 'sealed' and 'final' are crutches.
    Making local variables final is another good example, when you don't want code after the instance is created to be able to change the reference for the variable.
    If a method is so long that you can't see where a variable is being used, then it's too long. Dave Rooney Mayford Technologies
  37. If the code isn't supposed to change the value of an argument, then it shouldn't. That's Design by Contract. There should also be tests for the method to ensure that the Contract is honoured.
    Unfortunately, that doesn't work well in practice. The contract has to be read - e.g. from JavaDocs or some document. Reading API documentation is probably skipped by 80% of developers, and even if they read it, there is no guarantee they will understand or adhere to it. It's funny that the contracts of the Eclipse API are being mentioned as a good alternative. I wish they would use final more, actually. Currently you can see that a lot of plugins use internal classes etc., probably because the authors didn't read the contracts, couldn't find an easier way to do things, or simply didn't care. This makes those plugins like time bombs. Any new Eclipse version potentially breaks plugins - which in fact is what happens a lot. Add to that that Eclipse uses a lot of introspection etc., so often the only way to find out that a plugin is broken is to actually run it. 'But they should have been properly unit tested.' Agreed, but many of them aren't. That's the real world. And in dealing with real-world problems like this, the final keyword has an excellent case. -- Eelco
  38. re: Self Documenting Code[ Go to top ]

    'But they should have been properly unit tested.' Agreed, but many of them aren't. That's the real world.
    Maybe your real world, but not mine. If you write the code to be tested (or better yet, use test-driven development), then the classes and methods become very small and focused on what they're supposed to do, rather than multi-hundred-line monstrosities. When you use proper OO principles, concepts such as final aren't required to 'save' developers from themselves. So rather than focusing on the virtues and vagaries of final or sealed, perhaps we should focus on teaching developers something more fundamental, like the Single Responsibility Principle? I can buy the argument that it provides improved performance owing to compiler optimizations, but how many applications really and truly need it? Dave Rooney Mayford Technologies
  39. Maybe your real world, but not mine.
    Well, are you using Eclipse plugins? Are you using any open source projects? 100% coverage is very rare, and most are hardly covered at all.
  40. Eclipse internals[ Go to top ]

    One reason why so many Eclipse plugin developers rely on internal Eclipse packages is that there is code in there that should be exposed, but is not. You cannot extend JavaProjectWizard or JavaApplicationLaunchShortcut, for instance. So people end up copying the code they need just so they don't use an internal API. Now they have the other problem of divergence from the original code. I've done it and so have others.
  41. Re: Eclipse internals[ Go to top ]

    One reason why so many Eclipse plugin developers rely on internal Eclipse packages is that there is code in there that should be exposed, but is not.

    You cannot extend JavaProjectWizard or JavaApplicationLaunchShortcut, for instance.

    So people end up copying the code they need just so they don't use an internal API. Now they have the other problem of divergence from the original code. I've done it and so have others.
    Agreed. Using final a lot has a very big IF, imo: the maintainers of the API have to be serious about fixing issues that arise. With Wicket we usually respond to requests like that within a few days, typically by pointing out a better solution (I'd say 70% of cases), removing final (20%), or making broader improvements (usually because of something we completely missed and have now learned about because of the request).
    It's funny that the contracts of the Eclipse API are being mentioned as a good alternative. I wish they would use final more, actually. Currently you can see that a lot of plugins use internal classes etc., probably because the authors didn't read the contracts, couldn't find an easier way to do things, or simply didn't care. This makes those plugins like time bombs. Any new Eclipse version potentially breaks plugins - which in fact is what happens a lot. Add to that that Eclipse uses a lot of introspection etc., so often the only way to find out that a plugin is broken is to actually run it.

    'But they should have been properly unit tested.' Agreed, but many of them aren't. That's the real world. And in dealing with real-world problems like this, the final keyword has an excellent case.
    I know. It's a pain, but I think it beats the alternative. At least when a plugin breaks you know that you can trust the plugin developer a little less. They were warned but they used the internal API anyway. On the other hand, a responsible plugin developer could use an internal API only for testing, so that there is no breakage in production code, and they can end up producing higher quality software for you.. something that they might not be able to do without the internal API. People will be stupid, and to me that's okay as long as we get to recognize it and react accordingly.
  43. At least when a plugin breaks you know that you can trust the plugin developer a little less. They were warned but they used the internal API anyway. On the other hand, a responsible plugin developer could use an internal API only for testing, so that there is no breakage in production code, and they can end up producing higher quality software for you..
    It's not about programmers being bad or good kids. I believe that if you protect an API better, e.g. using final and private more (protected variables are another thing that is used in the Eclipse API a lot), people would be forced into discussion more, and the API developers might get better ideas about what they missed. People would still hack around it, but hopefully less.
  44. Re: re: Self Documenting Code[ Go to top ]

    There shouldn't have to be another layer of protection from bad programming practices. To me, 'sealed' and 'final' are crutches.
    Do you consider having private/protected/public methods to be a bad thing as well? (The ways in which a class can be extended are surely part of its public interface.) Also, there are many constructs in Java (and most modern languages) which serve to protect programmers from making mistakes... do you feel that all those are unnecessary, by the same reasoning? I mean: if you never make any mistakes, then you need no visibility modifiers, no declaration of variables prior to use, no constants, etc. DBC is good, and short methods are (generally) to be preferred. But I also like the idea of catching errors before even compiling my code, don't you?
  45. Final Params[ Go to top ]

    No, that is not design by contract. Design by contract is all about ensuring the values you are passed are legitimate for their use in the method. Their use may be read-only, in which case final not only indicates this but enforces it. To me it would be equally useful to have a way of merely indicating that something is read-only, but actually preventing it being changed stops all those hackers (of whom there are many) from altering something they're not supposed to.
    It's clear when you read what he is saying that he's referring to final classes. Final variables are great, and I use final in declaring class and member variables whenever possible. The main benefit is that the compiler will complain if not every initialization path ensures the variables are initialized properly. But final classes (and methods) are something else entirely. Personally, I rarely use them unless it would be really bad for someone to change the class or method. I don't agree that final prevents deep hierarchies. Good design does, but if someone has access to your source, they can remove the final, and if they don't, who are you to assume that no one will ever need to extend your class? Extension is definitely overused but can be a real get-out-of-jail-free card when you need it.
  47. Re: Self Documenting Code[ Go to top ]

    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.
    That seems rather pointless, at least for the reason you are giving. All parameters are passed by value in Java. Whether the parameter is final has no effect on the caller. I actually think that all method arguments should be final by default, but it has nothing to do with the caller. It helps eliminate some common coding errors, and when people reassign parameters to different values it's often confusing (although I think it sometimes has its place).
  48. Re: Self Documenting Code[ Go to top ]

    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.


    That seems rather pointless, at least for the reason you are giving. All parameters are passed by value in Java.
    Ahem, no, Java objects are passed by reference in Java. You cannot change what the reference points to, but you can change the contents of the objects using the getters and setters. Dear me!
  49. Re: Self Documenting Code[ Go to top ]

    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.


    That seems rather pointless, at least for the reason you are giving. All parameters are passed by value in Java.


    Ahem, no, Java objects are passed by reference in Java. You cannot change what the reference points to, but you can change the contents of the objects using the getters and setters. Dear me!
    You might want to brush up on the basics. Java passes all parameters by value. As Ken Arnold and James Gosling put it best in section 2.6.1 of The Java Programming Language, Second Edition: "There is exactly one parameter passing mode in Java -- pass by value -- and that helps keep things simple." http://www.javaworld.com/javaworld/javaqa/2000-05/03-qa-0526-pass.html http://www-128.ibm.com/developerworks/java/library/j-passbyval/
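    A short demonstration of the point (a sketch): reassigning a parameter never affects the caller, while mutating the object it refers to does.

    public class PassByValueDemo {
        static void reassign(StringBuilder param) {
            param.append(" world");            // mutates the object the caller sees
            param = new StringBuilder("gone"); // rebinds only the local copy of the reference
        }

        public static void main(String[] args) {
            StringBuilder sb = new StringBuilder("hello");
            reassign(sb);
            System.out.println(sb); // prints "hello world" - the caller's reference is untouched
        }
    }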
  50. Re: Self Documenting Code[ Go to top ]

    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.


    That seems rather pointless, at least for the reason you are giving. All parameters are passed by value in Java.


    Ahem, no, Java objects are passed by reference in Java. You cannot change what the reference points to, but you can change the contents of the objects using the getters and setters. Dear me!


    You might want to brush up on the basics. Java passes all parameters by value.

    As Ken Arnold and James Gosling put it best in section 2.6.1 of The Java Programming Language, Second Edition: "There is exactly one parameter passing mode in Java -- pass by value -- and that helps keep things simple."

    http://www.javaworld.com/javaworld/javaqa/2000-05/03-qa-0526-pass.html

    http://www-128.ibm.com/developerworks/java/library/j-passbyval/

    The object reference is passed by value, so point taken. However, I do believe that the statement was misleading, and quite clearly from the article you quote (IBM) it is an ambiguous statement. Seems to invite argument, really. Of course I am well aware that getters and setters are not the only way to change state; in fact some argue that in PURE OO you shouldn't have getters and setters at all. What was the original point? I forgot. Oh yes, final on parameters does not equal const... Quite correct, well done!
  51. Re: Self Documenting Code[ Go to top ]

    The object reference is passed by value, so point taken. However, I do believe that the statement was misleading, and quite clearly from the article you quote (IBM) it is an ambiguous statement.
    It's not ambiguous. I think you can definitely make a case that Objects are passed by reference in Java. But I didn't mention anything about Objects. I stated that all parameters in Java are passed by value, which is a completely accurate and incontrovertible statement.
    Seems to invite argument, really.

    Of course I am well aware that getters and setters are not the only way to change state; in fact some argue that in PURE OO you shouldn't have getters and setters at all.

    What was the original point? I forgot.

    Oh yes, final on parameters does not equal const... Quite correct, well done!
    The point was that people keep posting that final is important for parameter passing to prevent people from changing the passed-in value, implying that it could affect the caller. Obviously, final doesn't make a lick of difference in that regard. I think all parameters in Java should be final by default, but it has nothing to do with protecting the caller.
  52. Re: Self Documenting Code[ Go to top ]

    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.


    That seems rather pointless, at least for the reason you are giving. All parameters are passed by value in Java.


    Ahem, no, Java objects are passed by reference in Java. You cannot change what the reference points to, but you can change the contents of the objects using the getters and setters. Dear me!
    A couple more things: You don't need getters and setters to change an Object's state. And whether you can change the state of an Object passed to a method has nothing to do with my point (it actually is part of my point), because final doesn't prevent that.
  53. Re: Self Documenting Code[ Go to top ]

    Take for example a method that has an object argument. If you don't want the method's body to be able to change the reference of the passed in argument, you absolutely should set the argument to final.
    That seems rather pointless, at least for the reason you are giving. All parameters are passed by value in Java.
    Ahem, no, Java objects are passed by reference in Java. You cannot change what the reference points to, but you can change the contents of the objects using the getters and setters. Dear me!
    The references are passed by value. ;-) Peace, Cameron Purdy Tangosol Coherence: Clustered Shared Memory for Java
  54. I believe we call this "unclear on the concept." The only way you could convince me to get rid of the final keyword would be to make it the default for /everything/ and have non-final entities require explicit "extensible", "override" and "variable" keywords. Personally, I think that would be a step in the right direction because classes, methods, fields and even local variables should be final whenever they can be. Forcing explicit design decisions by making classes and methods final and then having to think about removing finality in each case is good for the evolution of a framework in particular. This is our general pattern on Wicket, and I think it is a good one.
    BTW, we often find that someone wanting to remove final from something wants to extend something in a way we didn't intend or don't want to support, or where we want to design a different kind of support. Each time that happens, we've actually saved ourselves, that person (in spite of what they might think), and a bunch of people like them loads of time and frustration, because we're almost certainly going to end up breaking their code in the future anyway, and sometimes in ways that will be difficult to work around.
    BTW, we often find that someone wanting to remove final from something wants to extend something in a way we didn't intend or don't want to support, or where we want to design a different kind of support. Each time that happens, we've actually saved ourselves, that person (in spite of what they might think), and a bunch of people like them loads of time and frustration, because we're almost certainly going to end up breaking their code in the future anyway, and sometimes in ways that will be difficult to work around.
    I'm going to assume this was in response to what I posted above. I painted my comment with an overly broad brush. If you have a clear plan and a really good reason to make something final, I completely agree with doing it. What I am against is making classes final for no specific reason, or because you 'don't want deep hierarchies', or because you think it will improve performance (no, I didn't make that one up).
  57. I believe we call this "unclear on the concept."

    The only way you could convince me to get rid of the final keyword would be to make it the default for /everything/ and have non-final entities require explicit "extensible", "override" and "variable" keywords. Personally, I think that would be a step in the right direction because classes, methods, fields and even local variables should be final whenever they can be.

    Forcing explicit design decisions by making classes and methods final and then having to think about removing finality in each case is good for the evolution of a framework in particular. This is our general pattern on Wicket, and I think it is a good one.
    While we are at it, let's make private the default and add an explicit package access modifier.
  58. The only way you could convince me to get rid of the final keyword would be to make it the default for /everything/ and have non-final entities require explicit "extensible", "override" and "variable" keywords.
    While we are at it, let's make private the default and add an explicit package access modifier.
    Can you imagine if Sun took us up on these changes? Talk about your JDK migration effort. In practice, I don't think there is much to the use/don't-use argument for the class-level final keyword. If you want to use it, cool. It won't really have much of an effect either way. As a client of many APIs (aren't we all?) I don't remember ever needing to extend a third-party class unless it was specifically meant to be extended (I'm thinking base classes and abstract classes). I doubt that most of the third-party Java code we use today uses the class-level final consistently, if at all. (Wicket aside of course). _____________ George Coller DevilElephant
  59. The only way you could convince me to get rid of the final keyword would be to make it the default for /everything/ and have non-final entities require explicit "extensible", "override" and "variable" keywords.


    While we are at it, let's make private the default and add an explicit package access modifier.


    Can you imagine if Sun took us up on these changes? Talk about your JDK migration effort.
    You could automate the whole thing. You just parse the code and swap out keywords (or lack thereof). They could provide such a tool to anyone migrating the API. Not that I think it'll happen.
  60. Can you imagine if Sun took us up on these changes? Talk about your JDK migration effort.
    I would guess there wouldn't be much to change. Probably just make the interpreter and compiler ignore final on class declarations. All new and old code should work fine. Of course, I don't think you have much to fear, since Sun never changes the language based on popular demand (which is the main reason Java hasn't evolved much over the years, and newer Java-ish languages are "eating its lunch" and people are defecting).
    Java hasn't evolved much over the years, and newer Java-ish languages are "eating its lunch" and people are defecting
    "The sky is falling. News at 11." Java is a platform and a language. Newer, cooler stuff will come along. People will eventually switch to some of those newer, cooler things. It's not "eating lunch", it's just forward progress of an industry. Our responsibility in IT is to make good investments (ones that return good net benefit), and to find ways to leverage those investments as new ideas come along. Whether one likes it or not, Java is not going anywhere for the next twenty years. However, ten years from now, very little new investment will be being made into Java -- at least not Java as we know it today. Maybe Sun (and the rest of the Java marketplace) find ways to keep Java on or close to the cutting edge. There's a good chance that will happen. If it doesn't, something will eventually take its place. Peace, Cameron Purdy Tangosol Coherence: The Java Data Grid
  62. I doubt that most of the third-party Java code we use today uses the class-level final consistently, if at all. (Wicket aside of course).
    Oh, there are a few, most notably the JDK API itself. What I don't understand in this whole discussion is why people feel final is so very different from the other defensive mechanisms we have in Java. It's one of the tools we have, and saying we should get rid of it strikes me as similar to saying we should get rid of the private modifier.
  63. I believe we call this "unclear on the concept."

    The only way you could convince me to get rid of the final keyword would be to make it the default for /everything/ and have non-final entities require explicit "extensible", "override" and "variable" keywords. Personally, I think that would be a step in the right direction because classes, methods, fields and even local variables should be final whenever they can be.

    Forcing explicit design decisions by making classes and methods final and then having to think about removing finality in each case is good for the evolution of a framework in particular. This is our general pattern on Wicket, and I think it is a good one.
    That sounds great, except for one thing. We're human, and sometimes we make mistakes. If you don't leave people an "out", they make their own "out", and the end result can be worse code. I see it in the field all the time. Frankly, I think final can be used well, but only if framework developers start to do something that they really haven't so far: they have to recognize that it's not enough to write unit tests for their framework, they have to write unit tests for code that -uses- their framework. That's when you encounter the mocking problem, and that's when you can surmount it by introducing interfaces, APIs that don't rely on static methods, etc. If they followed that simple rule, the problem would go away (except when they make mistakes).
  64. I believe we call this "unclear on the concept."

    The only way you could convince me to get rid of the final keyword would be to make it the default for /everything/ and have non-final entities require explicit "extensible", "override" and "variable" keywords. Personally, I think that would be a step in the right direction because classes, methods, fields and even local variables should be final whenever they can be.

    Forcing explicit design decisions by making classes and methods final and then having to think about removing finality in each case is good for the evolution of a framework in particular. This is our general pattern on Wicket, and I think it is a good one.
    Wow. Do you write production code? I'm imagining a conversation such as the following: "Hi, I need to do X to solve a customer problem. I'd like to use this class you designed, but I can't because you made this method I need to override final." Your answer: "I made it final because you shouldn't override it." "Uh, okay, well your class doesn't do what I need. Can you tell me how I can use your class to do X?" "You shouldn't do X, my class does Y. You should do Y." "Uh, well I'm really just trying to solve my customer's problem and they clearly need X, not Y." "We decided a long time ago we don't want to do X. It makes my life much cleaner if you only do Y. So explain to the customer that Y is what you can do for him." ... How to extend and how to use a class is the purpose of well-written javadocs and should be covered in your test cases. Crippling your code through extensive use of final classes and methods (and overuse of private methods for that matter) is a good way to limit its real-world use. If you are writing frameworks, your goal should be to slip quietly into the background, allowing the application architect to tune the solution to a business problem, not attempt to dictate a solution for problems you couldn't possibly have full requirements for. Let me guess, you've got lots of problems with application groups using cut-n-paste techniques with your framework code? Guess what... you're causing it. Even after your enlightening discussions explaining how you are smarter than they are, the application engineer just needs to get back to solving problems, not wrestling with your inflexible framework, so he probably will end up fixing your code for you... in the form of a modified library or cut-n-paste modification.
    How to extend and how to use a class is the purpose of well-written javadocs and should be covered in your test cases.

    Crippling your code through extensive use of final classes and methods (and overuse of private methods for that matter) is a good way to limit its real-world use.

    If you are writing frameworks, your goal should be to slip quietly into the background, allowing the application architect to tune the solution to a business problem, not attempt to dictate a solution for problems you couldn't possibly have full requirements for.

    Let me guess, you've got lots of problems with application groups using cut-n-paste techniques with your framework code? Guess what... you're causing it. Even after your enlightening discussions explaining how you are smarter than they are, the application engineer just needs to get back to solving problems, not wrestling with your inflexible framework, so he probably will end up fixing your code for you... in the form of a modified library or cut-n-paste modification.
    If you try to make your framework flexible for every thinkable extension, your work will have no end. In many cases, we have to make some assumptions about our framework code, and must have tools to enforce those assumptions. It's something like this: I can implement this functionality using internal aspects of this class, and make it 'final' to guarantee it will work properly. It's OK, because it's all in the same class, I'm not exposing the internals, and it will be a 4-simple-and-clean-code-line method, taking 1 hour to implement and test. But I can also implement this method to allow flexibility; then I'll need to externalize some functionality to new classes, create three interfaces, make my algorithm more generic, and raise the overall complexity of the framework, and it will take one week to implement and test. Having an API 'too extensible' makes it difficult to change things internally without breaking client code, because you never know how others will do it, and cannot enforce the 'right way' to do this. Well, you can write a good tutorial, but it will never have the same effect as a compilation error :) Making every method final by default may not be the best solution, but not having mechanisms to enforce some contracts (public, protected, final) is death.


    If you try to make your framework flexible for every thinkable extension, your work will have no end. In many cases, we have to make some assumptions about our framework code, and must have tools to enforce those assumptions.

    It's something like this: I can implement this functionality using internal aspects of this class, and make it 'final' to guarantee it will work properly. It's OK, because it's all in the same class, I'm not exposing the internals, and it will be a 4-simple-and-clean-code-line method, taking 1 hour to implement and test. But I can also implement this method to allow flexibility; then I'll need to externalize some functionality to new classes, create three interfaces, make my algorithm more generic, and raise the overall complexity of the framework, and it will take one week to implement and test.

    Having an API 'too extensible' makes difficult to change things internally without breaking client code, because you never know how others will do it, and cannot enforce the 'right way' to do this. Well, you can write a good tutorial, but it will never have the same effect as a compilation error :)

    Making every method final by default may not be the best solution, but not having mechanisms to enforce some contracts (public, protected, final) is death.
    Ronald, I think I generally agree with your sentiment. But I would point out that having Professional Services teams make cut-n-paste copies of your code so they can access what they want isn't upgradeable either, and the reality is that they will do this if you are inflexible in providing support for them.

    You do need to make assumptions when designing a framework, and those should be very well documented and encouraged. My point was simply that a heavy-handed technique such as final on classes/methods, and in some cases private methods or private fields with no alternate accessors, gives your users few options if you didn't foresee a problem, or if the problem decomposes differently than you had assumed.

    I think the Eclipse project has done a great job of this in their framework. Most classes clearly state something to the effect of "This class is not intended to be subclassed; use implementations of XXXHandler and YYYAdaptor to extend functionality", but they don't actually make their classes final. Usually what happens is you assume you will subclass a class, then read the comment, and then use composition. But what if you run into a case where XXXHandler and YYYAdaptor don't address your needs? Is the user just flat out of luck? Why would you prefer a hard language limitation to a reasonable 'soft' enforcement like the one used by Eclipse?

    Bryant
  67. Wow. Do you write production code? I'm imagining a conversation such as the following: "Hi, I need to do X to solve a customer problem. I'd like to use this class you designed, but I can't, because you made this method I need to override final." Your answer: "I made it final because you shouldn't override it." "Uh, okay, well, your class doesn't do what I need. Can you tell me how I can use your class to do X?" "You shouldn't do X; my class does Y. You should do Y." "Uh, well, I'm really just trying to solve my customer's problem, and they clearly need X, not Y." "We decided a long time ago we don't want to do X. It makes my life much cleaner if you only do Y. So explain to the customer that Y is what you can do for him."
    I think you just proved the need for final: to prevent subclassing as the most expedient means of extracting functionality from a class that doesn't/shouldn't provide it. If the developer had put "don't subclass this class" or "don't subclass this class in order to do X" in the Javadocs, you obviously would have ignored it. Hence the compiler must come to the rescue.
  68. Look at where C# went[ Go to top ]

    The article "Versioning, Virtual, and Override" on Artima discusses the design choices made in C#, where methods are non-virtual unless explicitly declared virtual (implicitly 'final' at the method level). http://www.artima.com/intv/nonvirtual.html Personally, I'm strongly in the final is good camp. The key point I see being expressed against it is that when poor code happens to incorporate final, testing becomes that much more difficult. I think final is valuable enough to keep. If you *really* need to extend such classes for testing, you could modify the bytecode.
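    For what it's worth, the bytecode route is less exotic than it sounds. Here is a rough sketch using the ASM bytecode library (the Definalizer class below is my own illustration, not a published tool): load the compiled class, clear the ACC_FINAL flag on the class and its methods, and feed the result to a custom class loader in the test harness.

        import org.objectweb.asm.ClassReader;
        import org.objectweb.asm.ClassVisitor;
        import org.objectweb.asm.ClassWriter;
        import org.objectweb.asm.MethodVisitor;
        import org.objectweb.asm.Opcodes;

        public class Definalizer {
            // Strips the final modifier from a class and all of its methods.
            public static byte[] definalize(byte[] classFile) {
                ClassReader reader = new ClassReader(classFile);
                ClassWriter writer = new ClassWriter(0);
                reader.accept(new ClassVisitor(Opcodes.ASM9, writer) {
                    @Override
                    public void visit(int version, int access, String name, String sig,
                                      String superName, String[] interfaces) {
                        super.visit(version, access & ~Opcodes.ACC_FINAL,
                                    name, sig, superName, interfaces);
                    }
                    @Override
                    public MethodVisitor visitMethod(int access, String name, String desc,
                                                     String sig, String[] exceptions) {
                        return super.visitMethod(access & ~Opcodes.ACC_FINAL,
                                                 name, desc, sig, exceptions);
                    }
                }, 0);
                return writer.toByteArray();
            }
        }

    Whether that belongs anywhere near a sane build is, of course, exactly what this thread is arguing about.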
  69. Re: Look at where C# went[ Go to top ]

    I'm strongly in the final is good camp.
    The funny thing is that most people (myself included) were in the "final is good" camp until they were subjected to code from people in the "final is good" camp ;-)

    Peace,

    Cameron Purdy
    Tangosol Coherence: The Java Data Grid
  70. Re: Look at where C# went[ Go to top ]

    I'm strongly in the final is good camp.


    The funny thing is that most people (myself included) were in the "final is good" camp until they were subjected to code from people in the "final is good" camp ;-)

    Peace,

    Cameron Purdy
    Tangosol Coherence: The Java Data Grid
    I've been banging my head against the wall when subjected to other people's code (or my own older code, for that matter) on many occasions, the final keyword being involved or not. I've cursed a couple of times because of final. But just as many times, or more, because of APIs that seem to consist solely of interfaces, with whole books of JavaDoc to read through to get any idea of the 'contract', giving me no guaranteed behaviour, forcing me to step through a zillion proxies when debugging, and not even remotely giving me an idea of what the best extension points are just by looking at some plain code (a good consulting/book-selling strategy, though). ;) -- Eelco
  71. Final is good[ Go to top ]

    Maybe one can think of it this way: while interfaces define the API contract (parameters, return values, exceptions), the final class modifier guarantees behavior. An example would be the case of plugins. If a plugin is written to perform a certain calculation in a certain way, then an important part of the "contract" is that its implementation is well-defined. If such a plugin is made extensible, the framework using the plugin can no longer guarantee that its implementation is correct.
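    A minimal sketch of that split, with hypothetical names:

        // The interface fixes the API contract.
        public interface ChecksumPlugin {
            int checksum(byte[] data);
        }

        // The final class fixes the behavior: the framework can rely on this
        // exact calculation, because no subclass can override checksum()
        // with a broken or malicious variant.
        public final class XorChecksumPlugin implements ChecksumPlugin {
            @Override
            public int checksum(byte[] data) {
                int acc = 0;
                for (byte b : data) {
                    acc = (acc ^ b) & 0xFF;
                }
                return acc;
            }
        }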
  72. Changing code so you can test it[ Go to top ]

    Every time I change some code (or change the way I write it in the first place) so that it is easier to test but harder to understand, I cringe. I don't like writing it "wrong" so it can be tested.

    I'm not talking about something like IoC, which provides other benefits AND makes testing easier. I'm talking about making a method that should be private non-private just so I can write a test that makes sure that code works.

    Sometimes this is just a smell, and it indicates something rotten elsewhere in the design. Sometimes, even then, it's not a trade-off I am willing to make to refactor or redesign the rottenness out.

    How this applies: it makes me nervous to read that we should get rid of finals because of the testing consequences.

    (hmmm ... that's a nice pun)
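    A sketch of the kind of change I mean (a hypothetical class):

        public class PriceCalculator {
            public long totalInCents(double unitPrice, int quantity) {
                return roundToCents(unitPrice) * quantity;
            }

            // Was private. Relaxed to package-private purely so a unit test
            // in the same package can exercise the rounding rule directly;
            // the wider visibility serves the test, not the design.
            long roundToCents(double price) {
                return Math.round(price * 100);
            }
        }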
  73. Every time I change some code (or change the way I write it in the first place) so that it is easier to test but harder to understand, I cringe. I don't like writing it "wrong" so it can be tested.

    I'm not talking about something like IoC, which provides other benefits AND makes testing easier. I'm talking about making a method that should be private non-private just so I can write a test that makes sure that code works.

    Sometimes this is just a smell, and it indicates something rotten elsewhere in the design. Sometimes, even then, it's not a trade-off I am willing to make to refactor or redesign the rottenness out.

    How this applies:

    It makes me nervous to read that we should get rid of finals because of the testing consequences.

    (hmmm ... that's a nice pun)
    It's funny. There are some people who dislike IoC because they think it muddies the design as well. I think that notions of good design are changing a bit. Honestly, I'm not sure I like all of the implications either.

    One thing I do know, however, is that there have been many cases where people have used final sparingly and not had any ill effects. There's a new zeal now to use final as much as possible, and I don't think people are thinking about the effects, or encountering them. Maybe the majority of them aren't unit testing. Yes, you can use bytecode manipulation to get out of the jam, but really, is that appropriate? It's an unnecessary barrier to entry for unit testing, in my opinion.

    So, I say "let's deprecate final." Not much to worry about; nothing deprecated ever seems to go away. :-)

    But the bigger question is: what is a more appropriate mechanism for solving the problem? As I pointed out in my blog, final when applied to classes and methods is pretty coarse; it's a little too binary. A yet bigger question is: how much security does final really give us? Something to think about..
  74. It's funny. There are some people who dislike IoC because they think it muddies the design as well.
    IoC is hallelujah-ed to heaven (and opponents are cursed to hell), but it has its own issues when applied without thinking.
    I think that notions of good design are changing a bit.
    And is not something people will ever 100% agree on anyway.
    One thing I do know, however, is that there have been many cases where people have used final sparingly and not had any ill effects.
    Shielding against ill effects (like breaking fewer clients when you change 'internals') is only half of the story. Encouraging clients to look at what is considered the 'best' solution, and to discuss on the mailing list if they don't agree with it or couldn't find suitable extension points, is all about taking future change into account and trying to incrementally build an API that is solid and extensible. If you open up from day one, you'll probably never get those discussions going, and as a framework author you probably won't know half of the kinds of extending people are doing. That's a missed opportunity for the authors to make the framework better, and for the users to learn about more suitable ways to do things. And about those ill effects: my personal experience is that because the framework I'm working on does use a lot of final classes, we were able to implement some pretty drastic internal changes with hardly any API breaks. If we had opened up too early, this would not have been possible. Honest. The interesting thing, though, is that we are opening up more and more as we have more areas we are confident about (usually due to discussions with users).
    There's a new zeal now to use final as much as possible and I don't think people are thinking about the effects, or encountering them.
    I can't speak for others, but all the people who work on the framework I'm working on use final deliberately, and at the same time seriously consider what the extension points should be. Funny, btw, that you think using final is a new hype, while an earlier statement in this thread was that hardly anyone was using it anyway. /me puts on his smart-ass grin.
    Maybe the majority of them aren't unit testing. Yes, you can use bytecode manipulation to get out of the jam, but really, is that appropriate? It's an unnecessary barrier to entry for unit testing, in my opinion.
    Yeah, using bytecode for that sucks. However, why is it so important to be able to white-box test everything? I never really got into mock testing; maybe that's my problem. I like to test other parties' code against the contracts they give, and if they behave accordingly, I'm mostly happy.
    So, I say "let's deprecate final." Not much to worry about, nothing deprecated ever seems to go away. :-)
    :) Don't get me started. If there is any keyword I loathe and would like to get rid of, it's deprecated.
    But the bigger question is: what is a more appropriate mechanism for solving the problem? As I pointed out in my blog, final when applied to classes and methods is pretty coarse; it's a little too binary. A yet bigger question is: how much security does final really give us? Something to think about..
    Maybe something like 'unfinal' wouldn't be so bad. I don't really have a problem with people circumventing final. It's a deliberate choice they make: they have to do a little bit more work, and thus will think about it first. An 'unfinal' mechanism would be a much easier way than the current ones, but still a deliberate choice to 'break'.
    I usually make all application classes final (except for persistable classes, since Hibernate wouldn't allow them to be lazy). One non-obvious benefit of final classes/methods is that it makes it possible for static analysis tools to automatically warn about declared checked exceptions that are not actually thrown. You can also find unused methods that were originally protected. For unit testing (with JUnit) code that depends on final methods, static methods, or objects directly instantiated with the "new" operator, I use JMockit. Rogerio
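    A rough sketch of what that looks like with a recent JMockit version (FinalCollaborator and Service are hypothetical; assume FinalCollaborator is a final class with an int compute(int) method, and Service delegates to it). JMockit redefines the class in the running JVM, so final is no obstacle:

        import static org.junit.Assert.assertEquals;
        import mockit.Expectations;
        import mockit.Mocked;
        import org.junit.Test;

        public class ServiceTest {
            @Mocked
            FinalCollaborator collaborator;  // a final class we don't own

            @Test
            public void usesTheCollaboratorsResult() {
                new Expectations() {{
                    collaborator.compute(anyInt);
                    result = 42;
                }};
                Service service = new Service(collaborator);
                assertEquals(42, service.run(7));
            }
        }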
    Well... I remember that the author of "Hardcore Java" highly recommends the use of the final keyword.
    Final is good for some situations and bad for others. For example:

        public void func(final int x) {
            // ...
            x = 20;  // compile error: cannot assign a value to final parameter x
            // ...
        }

    Without the final keyword, the reassignment would compile silently, and finding the resulting problem could be difficult.
    I TOTALLY agree. Me too: I used the final keyword for method parameters and found that this helped me in writing better code. For this reason, I'm teaching people (especially inexperienced young programmers) to do the same. Ciao! Alex
  79. I used the final keyword for method parameters and found that this helped me in writing better code.
    Please explain how it helped. Please define "better". Dave Rooney Mayford Technologies
    Better from a design point of view. The "final" is an important reminder that input parameters are, and should always remain, input parameters. The return value should be a new object or a collection/map of new objects. And... yes, I used to be a C/C++ programmer; that's why I like final parameters. Of course, final is just one thing, maybe not even the most important; there are many more, and all of them are aimed at young programmers who would otherwise write the usual spaghetti code (my worst nightmare). By the way, I find that deprecating final makes no sense. Let people who like it use it, and people who don't like it... not use it. ;-) Thanks for the comment, Alex
  81. Better from a design point of view. The "final" is an important reminder that input parameters are, and should always remain, input parameters. The return value should be a new object or a collection/map of new objects.
    Again, whether the method parameters are final or not has no effect on this in Java. final != const.
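    To illustrate the difference, a minimal sketch:

        import java.util.List;

        public class FinalVsConst {
            static void fill(final List<String> names) {
                names.add("x");  // compiles fine: final does not make the object immutable
                // names = new java.util.ArrayList<String>();  // compile error: the reference is final
            }
        }

    A C++ const reference would forbid the mutation as well; Java's final only pins the parameter variable itself.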
  82. Doing mockups[ Go to top ]

    One of the points of the article, the way I understand it, is that when class A is declared final it is hard to create mockups, because you cannot extend it. But why do your mockups rely on the "is-a" paradigm and not on "has-a" (aggregation)? Is this because you may want to test, say, private methods as well, which you cannot do with aggregation? Then again, you probably shouldn't: if class A is designed well, any call flow involving a private method will be driven by a public, high-level entry point. Quite frankly, I find final really useful.
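    For what it's worth, here is a minimal sketch of the "has-a" approach (all names hypothetical): wrap the final class behind your own interface, delegate in production, and substitute a hand-rolled fake in tests. No subclassing of the final class is required.

        // Stand-in for a final class we don't own and cannot extend.
        public final class ThirdPartyClock {
            public long now() { return System.currentTimeMillis(); }
        }

        public interface Clock {
            long now();
        }

        // Production adapter: has-a, not is-a.
        public class ThirdPartyClockAdapter implements Clock {
            private final ThirdPartyClock delegate = new ThirdPartyClock();
            public long now() { return delegate.now(); }
        }

        // Test double: no mocking library, no bytecode tricks needed.
        class FixedClock implements Clock {
            public long now() { return 42L; }
        }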
    I've already seen one of Michael's talks on dealing with legacy code, and I have to agree with him that when you are faced with an unknown codebase, inheritance is a really useful tool for getting it under test. You should know that he does not try to make the codebase pretty, but unit-testable. My opinion in all of this discussion is that those who use final classes should read a book about OOP, and those who declare a method final in a first release should remember that most codebases will outlive their participation in the project. Remember why we use java.util.ArrayList? Because java.util.Vector's methods were declared final in the early JDKs. That was an awful case of premature finalisation.
  84. I've already seen one of Michael's talks on dealing with legacy code, and I have to agree with him that when you are faced with an unknown codebase, inheritance is a really useful tool for getting it under test. You should know that he does not try to make the codebase pretty, but unit-testable.

    My opinion in all of this discussion is that those who use final classes should read a book about OOP, and those who declare a method final in a first release should remember that most codebases will outlive their participation in the project.

    Remember why we use java.util.ArrayList? Because java.util.Vector's methods were declared final in the early JDKs. That was an awful case of premature finalisation.
    OOP? What about it? I've been doing it for ten years; I understand inheritance and have realised it's often bad - something a lot of the books don't tell you. Interfaces and dependency injection are the best way to achieve good OO, unless you'd like to tell me something different. Why do we use ArrayList? Because Vector is synchronised. If you really wanted to use Vector and subclass it, you could wrap it as per the decorator pattern. But you might want to ask why Vector's methods were declared final (maybe it was a bad decision, maybe not). I've never found a good use for Vector; it's fat and slow. Luv Simon
  85. OOP? What about it? I've been doing it for ten years; I understand inheritance and have realised it's often bad - something a lot of the books don't tell you. Interfaces and dependency injection are the best way to achieve good OO, unless you'd like to tell me something different.
    Spot on! There's an anti-pattern for this type of OOP - "Yo-Yo Logic Flow" or something like that. Deep/deepish/not-so-deep inheritance trees are a notoriously unmaintainable pain in the ass. This is why we have the Strategy pattern, IoC, etc. As far as using unit testing of crap code as an excuse to remove final goes... surely you'd like to isolate your lovely clean code from that crap by wrapping it in a clean, non-final abstraction. Then you can mock the nice clean abstraction.
  86. Framework vs Library[ Go to top ]

    Why is it that discussions on TSS focus mainly on frameworks? Using final (for classes) in a framework is definitely a risky strategy. The exception to the rule is if you have the ability to respond quickly when a user hits a problem with final (as the Wicket team described).

    However, none of this debate has mentioned the role of libraries - the very low-level bits of code that we all depend on. The JDK has final classes like String and Long. Is this discussion proposing to take final away from them? Currently, making them final is the only effective way to guarantee immutability. (Note: this is probably a JDK enhancement request for an immutable keyword...)

    In Joda-Time we have to create such immutable objects, such as DateTime. These have to be immutable, and thus have to be final. However, we still wanted to allow our users to write their own classes that interact with the API. To achieve this we have an interface, ReadableInstant, that is implemented by our date class DateTime, but which can also be implemented by your own class if desired. We then ensured that all the other methods on the API took method parameters of type ReadableInstant (the interface), not DateTime (our implementation). Thus we have a simple design that provides immutable implementations in the library, but allows users the flexibility to add their own strange code if needed.

    In summary, I strongly disagree with the notion of removing final. It has its place and can be useful. But I would argue that if you do use it, you need to treat it as a distinct API design decision.
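    In code, the pattern looks roughly like this (a simplified sketch, not the actual Joda-Time source):

        // The interface is the extension point users may implement.
        public interface ReadableInstant {
            long getMillis();
        }

        // The implementation is final, so its immutability is guaranteed.
        public final class DateTime implements ReadableInstant {
            private final long millis;

            public DateTime(long millis) { this.millis = millis; }

            public long getMillis() { return millis; }

            // API methods accept the interface, so user-written
            // implementations interoperate with the final class.
            public boolean isBefore(ReadableInstant other) {
                return millis < other.getMillis();
            }
        }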
  87. But I would argue that if you do use it, you need to treat it as a distinct API design decision.
    +1