Mock Objects in the Real World

  1. Mock Objects in the Real World (24 messages)

    Ben Teese of Shine Technologies recently finished work on a project that extensively used the Mock Objects design pattern. In doing so, he believes he "went to the very limit of what can be tested". In this article he gives his own take on exactly what Mock Objects are, describes his experiences using them on a real project and asks the question: was it worth it?

    Read more: Unit Testing With Mock Objects.

    Threaded Messages (24)

  2. Mock Objects in the Real World[ Go to top ]

    I've read a really valuable article by Martin Fowler on the subject. It gave me another point of view on mock objects. Until then, I had really thought of Mock Objects as Stubs.

    http://www.martinfowler.com/articles/mocksArentStubs.html
  3. Mock Objects in the Real World[ Go to top ]

    This article seems to be a case study on the usefulness of IoC (or Dependency Injection if you will). Many of the problems that the author encountered when testing with Mock Objects can be solved quickly and easily by using an IoC container like Spring or Hivemind. I found this particularly strange since the author mentioned Dependency Injection by name but doesn't actually make use of it in his project (I don't consider using Factories for object creation DI).
  4. How to do DI on Serialized Objects?[ Go to top ]

    Dependency Injection is a great tool, but lately I've been wondering how I'd do it with Serialized objects (without resorting to AOP). For instance, say you have an object that receives a Logger via Dependency Injection. Then the object gets serialized, the Logger object is transient so it is skipped. Later, the object is deserialized, but the Logger is now null and an error results. How best to deal with this?
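
    For concreteness, here's a minimal sketch of the situation (the class name and logger choice are just made up for illustration):

    import java.io.Serializable;
    import java.util.logging.Logger;

    public class Order implements Serializable {

        // Injected via the constructor; not serializable, so marked transient.
        private transient Logger logger;
        private String id;

        public Order(String id, Logger logger) {
            this.id = id;
            this.logger = logger;
        }

        public void confirm() {
            // Fine before serialization. After this object has been serialized
            // and deserialized, logger is null and this throws NullPointerException.
            logger.info("Confirming order " + id);
        }
    }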
  5. How to do DI on Serialized Objects?[ Go to top ]

    Dependency Injection is a great tool, but lately I've been wondering how I'd do it with Serialized objects (without resorting to AOP). For instance, say you have an object that receives a Logger via Dependency Injection. Then the object gets serialized, the Logger object is transient so it is skipped. Later, the object is deserialized, but the Logger is now null and an error results. How best to deal with this?

    Well, DI is just one part of IoC.
    You might want to use an IoC container which does more than just DI and supports more complex lifecycles, including passivation/activation-like operations. The Avalon Framework defines quite a rich lifecycle:
    http://avalon.apache.org/framework/principals/lifecycle.html
    (even too complex, IMO).

    "Recompose" phase is what you are looking after.
    It should quite simple to implement such operations with the most of open source IoC containers.


    Michal
  6. How to do DI on Serialized Objects?[ Go to top ]

    Well, DI is just one part of IoC. You might want to use an IoC container which does more than just DI and supports more complex lifecycles, including passivation/activation-like operations.

    How does the container know an object has been deserialized and needs recomposing? I.e., let's say a servlet container serializes and deserializes a session between restarts. That servlet container doesn't know anything about the Avalon container or the Spring framework, so it can't tell them to recompose the objects it has just deserialized.
  7. How to do DI on Serialized Objects?[ Go to top ]

    Well, DI is just one part of IoC. You might want to use an IoC container which does more than just DI and supports more complex lifecycles, including passivation/activation-like operations.
    How does the container know an object has been deserialized and needs recomposing? I.e., let's say a servlet container serializes and deserializes a session between restarts. That servlet container doesn't know anything about the Avalon container or the Spring framework, so it can't tell them to recompose the objects it has just deserialized.

    I am not sure I understand your problem.

    Components live in the container, and as long as there is a way to access a component instance via a lookup operation, you don't need to care about re-injecting loggers and other services into components. That's the role of the container. Previously I was referring to a more complex situation than the one you want to handle.

    The only thing you ever have to care about is how to access the container and its services. So the only problem is how to restart the container when the web application is restarted, and how to detect that references to components which are kept in your HTTP sessions are invalid. And this is not so different from (if not identical to) starting the container for the first time.

    Michal
  8. How to do DI on Serialized Objects?[ Go to top ]

    So the only problem is how to restart the container when the web application is restarted

    Trivial enough and already being done.
    and how to detect that references to components which are kept in your HTTP sessions are invalid.

    Who detects that references to components in these session objects are invalid? And what do they do about it? Should I be scanning user sessions for all objects that might contain invalid references to components? I considered that, but it seemed pretty cumbersome, with no satisfactory way of determining when a session should be so scanned. It seemed easier and less trouble to just have these objects know how to fix their invalid references themselves by grabbing the singleton container.
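
    For concreteness, here's roughly what that "fix yourself on deserialization" approach looks like, reusing the shape of the earlier Order sketch - just a sketch, with a made-up ServiceRegistry singleton standing in for whatever container wrapper you actually use:

    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.Serializable;
    import java.util.logging.Logger;

    // Hypothetical singleton wrapper around the real container.
    class ServiceRegistry {
        private static final ServiceRegistry INSTANCE = new ServiceRegistry();
        static ServiceRegistry getInstance() { return INSTANCE; }
        Logger loggerFor(Class owner) { return Logger.getLogger(owner.getName()); }
    }

    public class Order implements Serializable {

        private transient Logger logger;
        private String id;

        public Order(String id, Logger logger) {
            this.id = id;
            this.logger = logger;
        }

        // Called by the serialization machinery when the object is read back in;
        // re-acquire the transient dependency from the singleton registry.
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            this.logger = ServiceRegistry.getInstance().loggerFor(Order.class);
        }
    }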
  9. How to do DI on Serialized Objects?[ Go to top ]

    and how to detect that references to components which are kept in your HTTP sessions are invalid.
    Who detects that references to components in these session objects are invalid? And what do they do about it? Should I be scanning user sessions for all objects that might contain invalid references to components? I considered that, but it seemed pretty cumbersome, with no satisfactory way of determining when a session should be so scanned. It seemed easier and less trouble to just have these objects know how to fix their invalid references themselves by grabbing the singleton container.

    You are trying to make your life harder :).
    An HttpSession is a kind of cache. The very idea that objects kept in a cache "know how to fix their invalid references themselves by grabbing the singleton container" when the cache has been restarted is, IMO, quite bad. It is bad for any objects; for container-managed objects it is even worse. You should probably abstain from any links from components to the container, and from manual execution of the lifecycle phases of such objects. You should be using only interfaces in your code, so you don't even have access to (or knowledge of) such methods. It should be up to the container to manage all this and keep things simple.

    I don't have a good, direct solution for you, as it is quite a generic problem.

    You may think about:
    a) Looking up components per request (note that Action classes can be components themselves and hide a lot of low-level details)
    b) Keeping only the state (memento) that components need in the session, rather than the components themselves.

    Michal
  10. How to do DI on Serialized Objects?[ Go to top ]

    and how to detect that references to components which are kept in your HTTP sessions are invalid.
    Who detects that references to components in these session objects are invalid? And what do they do about it? Should I be scanning user sessions for all objects that might contain invalid references to components? I considered that, but it seemed pretty cumbersome, with no satisfactory way of determining when a session should be so scanned. It seemed easier and less trouble to just have these objects know how to fix their invalid references themselves by grabbing the singleton container.
    You are trying to make your life harder :). An HttpSession is a kind of cache. The very idea that objects kept in a cache "know how to fix their invalid references themselves by grabbing the singleton container" when the cache has been restarted is, IMO, quite bad. It is bad for any objects; for container-managed objects it is even worse. You should probably abstain from any links from components to the container, and from manual execution of the lifecycle phases of such objects. You should be using only interfaces in your code, so you don't even have access to (or knowledge of) such methods. It should be up to the container to manage all this and keep things simple.
    I don't have a good, direct solution for you, as it is quite a generic problem. You may think about: a) looking up components per request (note that Action classes can be components themselves and hide a lot of low-level details); b) keeping only the state (memento) that components need in the session, rather than the components themselves.
    Michal

    If I understand the original problem correctly, the scenario is this.

    1. the server is running and there are active sessions
    2. the webapp stores some stuff in the httpSession
    3. the objects need to be saved when the servlet container persists the sessions
    4. the objects have additional functionality like logging, etc
    5. the server crashes or is restarted
    6. the servlet container deserializes the HttpSession objects back

    So in this case, if the objects do not have any transient members, or they use the ServletContext logger, it's probably going to be OK. I haven't tested it, so I don't know that for a fact.

    What happens if the objects use a custom logging facility? I'll use Tomcat as the example, since I know it better than other containers. At startup, Tomcat deploys the webapp and creates any required objects, if there's a ServletContextListener registered in web.xml. Tomcat then reloads the persisted sessions, so the raw data is there. So the problem is this.

    1. what happens to the transient members or other references that do not implement Serializable?
    2. who should set the transient members to real instances?
    3. in the case where what is stored in the HttpSession is an object graph, what is responsible for checking that all the objects are set up and ready to go?
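
    On question 2, one hook worth mentioning is the Servlet API's own HttpSessionActivationListener (Servlet 2.3+): an object bound to the session is notified when the session is passivated/activated and can re-establish its transient members at that point. A rough sketch, with a made-up class and the simplest possible re-acquisition:

    import java.io.Serializable;
    import java.util.logging.Logger;
    import javax.servlet.http.HttpSessionActivationListener;
    import javax.servlet.http.HttpSessionEvent;

    public class ShoppingCart implements Serializable, HttpSessionActivationListener {

        private transient Logger logger = Logger.getLogger(ShoppingCart.class.getName());

        public void sessionWillPassivate(HttpSessionEvent se) {
            // Nothing to do: transient fields are simply dropped on serialization.
        }

        public void sessionDidActivate(HttpSessionEvent se) {
            // Re-establish transient members once the container has
            // deserialized this attribute back into the session.
            logger = Logger.getLogger(ShoppingCart.class.getName());
        }
    }

    Exactly when a given container fires these events across a restart can vary, so it's worth checking against your container before relying on it.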

    I don't see any problem with a singleton pattern for getting loggers, or something equivalent. In a container-managed environment like an EJB container, I would agree it would be bad: it would blur the lines of who manages what and make maintenance harder. In a simple servlet container environment, the definition of lifecycle is generally just the HttpSession; there's no complex setup or tear-down. One approach I've used in the past was to extend the access log or application log in Tomcat. That way, if a persistent HttpSession is reloaded on restart, it can simply log to ServletContext.log().

    That approach doesn't work for everyone though, so how to handle more complex situations with DI, IoC or AOP still merits discussion and exploration :) I'm not convinced that any one of the approaches is better than the others; it probably comes down to personal choice.
  11. persistence and service references[ Go to top ]

    If I understand the original problem correctly, the scenario is this.

    1. the server is running and there are active sessions
    2. the webapp stores some stuff in the httpSession
    3. the objects need to be saved when the servlet container persists the sessions
    4. the objects have additional functionality like logging, etc
    5. the server crashes or is restarted
    6. the servlet container deserializes the HttpSession objects back

    This is a general problem with persisting object graphs. Simply design the graph to avoid holding external object references :-).

    Seriously though - if you need to store the reference because it genuinely is state, then the referenced object should also be inside the graph, because it too is probably "state". OTOH, if the object reference is purely for service access, then you should be looking the service up (via SL/IoC etc.) each time you need to use it rather than storing it.
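
    In code, the distinction looks something like this (types invented purely for illustration): the session keeps only state, and the service is obtained at the point of use rather than being referenced from the stored object.

    import java.io.Serializable;
    import java.math.BigDecimal;

    // Pure state: safe to serialize along with the HttpSession.
    public class OrderState implements Serializable {
        private final String productCode;
        private final int quantity;

        public OrderState(String productCode, int quantity) {
            this.productCode = productCode;
            this.quantity = quantity;
        }

        public String getProductCode() { return productCode; }
        public int getQuantity() { return quantity; }
    }

    // A service: never stored in the session; looked up (SL) or injected (IoC)
    // wherever it is needed.
    interface PricingService {
        BigDecimal priceFor(OrderState state);
    }

    // In request-handling code, roughly:
    //   OrderState state = (OrderState) session.getAttribute("order");
    //   PricingService pricing = ...;  // from the locator, or injected into this class
    //   BigDecimal total = pricing.priceFor(state);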

    Paul C.
  12. persistence and service references[ Go to top ]

    If I understand the original problem correctly, the scenario is this.
    1. the server is running and there are active sessions
    2. the webapp stores some stuff in the httpSession
    3. the objects need to be saved when the servlet container persists the sessions
    4. the objects have additional functionality like logging, etc
    5. the server crashes or is restarted
    6. the servlet container deserializes the HttpSession objects back
    This is a general problem with persisting object graphs. Simply design the graph to avoid holding external object references :-). Seriously though - if you need to store the reference because it genuinely is state, then the referenced object should also be inside the graph, because it too is probably "state". OTOH, if the object reference is purely for service access, then you should be looking the service up (via SL/IoC etc.) each time you need to use it rather than storing it. - Paul C.

    That would be one approach, right :) - if I choose to have dumb objects that always look up references to services like logging or filtering. If, on the other hand, I also use these beans outside Tomcat and make the objects smarter so that they can do their own logging (ignoring the reasons for doing this), and then try to use these not-so-dumb beans in a container, what are my options? I can think of a couple of ways.

    1. make it so the objects can simply look up the service or create their own, which would mean I can turn it off with my own ServletContextListener
    2. make whatever container I use - be it Avalon, Spring or something else - handle it

    I'm sure there's plenty of other options. If we consider a situation where I am inheriting an application from someone else and the objects already have extensive logic within them, I still need a solution for setting up the objects after the servlet container has deserialized them.

    Writing a new application from scratch, I can go the dumb-objects route, but there's no guarantee that dumb objects will meet my needs. So it's still going to be a balancing act to find a livable compromise between the various approaches.
  13. Mock Objects in the Real World[ Go to top ]

    ...I found this particularly strange since the author mentioned Dependency Injection by name but doesn't actually make use of it in his project...

    I think the author made it very clear why IoC doesn't work in his case, where there are too many layers. If you think about it, IoC is only a service locator in disguise - does it make much difference whether you write locator.find(name) or iocContainer.get(name)? The difference is more in how they are used: while a service locator is commonly used freely in any layer, IoC is intended to be used only in the highest layer. In the author's case, that would be very inconvenient.
  14. Mock Objects in the Real World[ Go to top ]

    I think the author made it very clear why IoC doesn't work in his case, where there are too many layers. If you think about it, IoC is only a service locator in disguise - does it make much difference whether you write locator.find(name) or iocContainer.get(name)? The difference is more in how they are used: while a service locator is commonly used freely in any layer, IoC is intended to be used only in the highest layer. In the author's case, that would be very inconvenient.

    And this is what I'm getting at too. It becomes problematic to try to use Dependency Injection and IoC techniques for all your classes that need loggers and access to services, etc. You can only easily use these techniques on your top-level services and components. Many other classes that have dependencies (especially those with long lifecycles) are better off getting things on their own, i.e. via a singleton call to the container or something similar.
  15. Mock Objects in the Real World[ Go to top ]

    I think the author made it very clear why IoC doesn't work in his case, where there are too many layers. If you think about it, IoC is only a service locator in disguise - does it make much difference whether you write locator.find(name) or iocContainer.get(name)? The difference is more in how they are used: while a service locator is commonly used freely in any layer, IoC is intended to be used only in the highest layer. In the author's case, that would be very inconvenient.
    I don't agree that IoC is only useful for managing dependencies in the so-called "highest" layer. IoC is a really basic idea... it simply glues your objects together. It is still up to the developer to properly decompose their object models.

    I don't see using IoC as any more inconvenient than what the author does - which in some cases is to define interfaces and factories whose sole purpose is to mock out a single method call. Read the section titled "Testing Methods that Call Other Methods": mocking at that low a granularity is inconvenient no matter how you slice it, and it leads me to believe that maybe an important abstraction has been missed.
  16. Mock Objects in the Real World[ Go to top ]

    IoC ... simply glues your objects together.

    How is an IoC container going to glue together my objects that have been automatically deserialized by the servlet container on startup?
  17. Mock Objects in the Real World[ Go to top ]

    ...I found this particularly strange since the author mentioned Dependency Injection by name but doesn't actually make use of it in his project...
    I think the author made it very clear why IoC doesn't work in his case where there are too many layers.

    Actually, I thought the author was very confusing on this point. DI doesn't involve passing interface implementations through multiple levels of method calls, as he implies. It involves defining the implementations in an external source, and allowing the container to configure the implementations. If he was passing the implementations as method parameters, then he wasn't doing DI.

    Also, the author sang the praises of mock objects for allowing him to unit test classes in isolation. If he is testing them in isolation, why would he need to send his services through multiple levels of method calls?
  18. DI != Service Locator[ Go to top ]

    I think the author made it very clear why IoC doesn't work in his case, where there are too many layers. If you think about it, IoC is only a service locator in disguise - does it make much difference whether you write locator.find(name) or iocContainer.get(name)?

    I think you're missing the point of inversion of control. With IoC, you hardly ever call iocContainer.get(name). Instead, control is *inverted* - the DI framework injects the service into your component. The 'Hollywood principle': don't call us, we'll call (inject stuff into) you. Admittedly, most intro articles on DI/IoC frameworks are misleading on this point, as the first thing they show you is how to make the call to iocContainer.get(name). But actually that call should be buried in one or two places in your infrastructure code. Application code shouldn't need to depend on or make calls to the DI container.

    In PicoContainer, this is known as the 'Container Dependency Anti-Pattern':

    http://www.picocontainer.org/Container+Dependency
    The difference is more in how they are used: while a service locator is commonly used freely in any layer, IoC is intended to be used only in the highest layer.

    Not necessarily. The highest level component starts the world, so to speak, but the chain of dependencies instantiated by the DI container can extend as far, or as low level, as you want.
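
    To make the "don't call us, we'll call you" point concrete, here is the difference in plain Java, with invented names and no particular container implied:

    interface MailService {
        void send(String message);
    }

    // Hypothetical static locator standing in for JNDI, a registry, etc.
    final class Locator {
        static MailService findMailService() {
            throw new UnsupportedOperationException("stub for illustration");
        }
    }

    // Service-locator style: the class reaches out and pulls in its dependency.
    class LocatorStyleReportGenerator {
        void generate() {
            MailService mail = Locator.findMailService();
            mail.send("report ready");
        }
    }

    // Inverted (DI) style: the class only declares the dependency; whoever
    // constructs it - the container in production, the test elsewhere -
    // pushes the implementation in. Application code never touches the container.
    class InjectedReportGenerator {
        private final MailService mail;

        InjectedReportGenerator(MailService mail) {
            this.mail = mail;
        }

        void generate() {
            mail.send("report ready");
        }
    }

    In a unit test the second form needs nothing but new InjectedReportGenerator(someMock), which is exactly where mock objects come in.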
  19. Mock Objects in the Real World[ Go to top ]

    I recently did a small Spring based project for an old employer of mine as a contract job.

    In this project I created a DAO Factory that allowed me to override the DAOs. In each unit test I created mock DAOs and set up the factory as appropriate for that JUnit test. Dependency Injection was used to give the DAO Factory to each object that required it.
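
    A simplified sketch of the general idea (here with a static factory for brevity - in the actual project the factory itself was injected - and with invented names):

    interface CustomerDao {
        String findName(long id);
    }

    // Production implementation (JDBC details elided in this sketch).
    class JdbcCustomerDao implements CustomerDao {
        public String findName(long id) { return null; /* real lookup here */ }
    }

    // Factory that lets unit tests swap in a mock DAO.
    class DaoFactory {
        private static CustomerDao customerDao = new JdbcCustomerDao();
        static CustomerDao getCustomerDao() { return customerDao; }
        static void setCustomerDao(CustomerDao dao) { customerDao = dao; }
    }

    class CustomerService {
        String greetingFor(long id) {
            return "Hello, " + DaoFactory.getCustomerDao().findName(id);
        }
    }

    // JUnit 3-style test that overrides the factory with a hand-rolled mock.
    public class CustomerServiceTest extends junit.framework.TestCase {
        public void testGreeting() {
            DaoFactory.setCustomerDao(new CustomerDao() {
                public String findName(long id) { return "Ben"; }
            });
            assertEquals("Hello, Ben", new CustomerService().greetingFor(1));
        }
    }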

    I thought another way of doing this would be to get rid of the DAO Factory, configure Spring from my unit test to override the bean factory definition, and inject a mock DAO directly (IMHO a much better design). But I was under a tight deadline and could not find any good documentation on how to configure the bean factory at runtime once it had been read from an XML file, so I went with the DAO Factory.

    Are there any good docs out there that address mocking strategies in Spring? I have yet to investigate this more.
  20. Use the BeanFactory, then ...[ Go to top ]

    ... inject your mock implementations where necessary, in the unit test itself.

    We have our own static BeanFactory wrapper for Spring's BeanFactory. In our unit tests, we will oftentimes just call the BeanFactory.getBean("mybean") method to get the object under test, with all its "production" relationships established, then call the setters for the relationships that we want to mock. It's kind of the best of both worlds (between always hand-wiring in the unit test and having a separate XML config for testing), but it does slow down the unit tests if you have a large configuration (as we do).
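
    Roughly, in a JUnit 3-style test (BeanFactoryWrapper, CheckoutService and PaymentGateway are placeholders for your own wrapper and bean types, and this uses the EasyMock 1.x MockControl API):

    // Requires: import org.easymock.MockControl;
    public void testCheckoutWithMockedGateway() {
        // Object under test, with all its "production" relationships wired by Spring.
        CheckoutService service =
                (CheckoutService) BeanFactoryWrapper.getBean("checkoutService");

        // Swap in a mock for just the collaborator this test cares about.
        MockControl control = MockControl.createControl(PaymentGateway.class);
        PaymentGateway gateway = (PaymentGateway) control.getMock();
        service.setPaymentGateway(gateway);

        gateway.charge("order-42");   // record the expected call
        control.replay();

        service.checkout("order-42");

        control.verify();
    }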

    We use a combination of "local" (anonymous inner class) mocks, "formal" mocks (hand-written, top-level classes), and the EasyMock library to provide mock objects in our tests.

    HTH, or if anyone has a better practice, would be cool to hear...
  21. Use the BeanFactory, then ...[ Go to top ]

    Thanks for the thoughts on the subject :-)
  22. Mock Objects in the Real World[ Go to top ]

    I found this particularly strange since the author mentioned Dependency Injection by name but doesn't actually make use of it in his project (I don't consider using Factories for object creation DI).

    Just to clarify: I did use dependency injection but under some circumstances it became impractical - for example, if objects were being passed a long way down the call stack without being used enroute. Under such circumstances I used a service locator instead.
  23. Mock Objects in the Real World[ Go to top ]

    Just to clarify: I did use dependency injection but under some circumstances it became impractical - for example, if objects were being passed a long way down the call stack without being used enroute. Under such circumstances I used a service locator instead.

    I was also recently involved in a project where we pushed mock objects to the limit, but I found DI extremely helpful. We didn't really have the problem you described where objects were being passed a long way down the call stack. The DI framework (Pico in this case) drove us towards a more decoupled design where each object only had a few dependencies. And instead of passing an object down the chain, we just let Pico chase down the dependencies.

    For example, we had a Struts action, which depended on a service class, which depended on a DAO, which depended on a Hibernate session object. We did *not* inject the session into the Struts action and pass it all the way down the chain. We just wrote a constructor for the action that took the service interface as its only parameter. The service implementation class had a constructor that took the DAO as a parameter. And the DAO had a constructor that took the session as a parameter. Each class had exactly one dependency. Unit testing the action was easy, as we only had to mock out one object, the service. Unit testing the service was easy, as we only had to mock out one object, the DAO. Etc. When Pico instantiates the action, it first walks the dependency chain and instantiates the DAO (passing the session), then the service, and then finally the action. Pico figures all that out; we don't have to worry about it in our code. And the DI framework made it easy to write all of this test-first, without having to worry about what was needed by components several layers lower. Constructor injection works really well here. With CI, if you can instantiate the object in a unit test, then you know you've satisfied all of its dependencies: the object is constructed in a valid state. With a service locator, or setter injection, you can get bitten by missed dependencies (as you described).
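
    In skeleton form - hypothetical names, with the Hibernate session level left out - the chain and the corresponding unit test look like this:

    interface OrderDao { String loadOrder(long id); }
    interface OrderService { String describeOrder(long id); }

    // Each class declares exactly one dependency in its constructor.
    class OrderServiceImpl implements OrderService {
        private final OrderDao dao;
        OrderServiceImpl(OrderDao dao) { this.dao = dao; }
        public String describeOrder(long id) { return "Order: " + dao.loadOrder(id); }
    }

    class OrderAction {                      // stands in for the Struts action
        private final OrderService service;
        OrderAction(OrderService service) { this.service = service; }
        String execute(long id) { return service.describeOrder(id); }
    }

    // Unit-testing the action means faking only its single dependency.
    public class OrderActionTest extends junit.framework.TestCase {
        public void testExecute() {
            OrderService stubService = new OrderService() {
                public String describeOrder(long id) { return "Order: stub"; }
            };
            assertEquals("Order: stub", new OrderAction(stubService).execute(7));
        }
    }

    The container's only job is to build the real chain (session into DAO, DAO into service, service into action) when the application is wired up.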

    So to me, DI makes it a lot easier to do TDD with mock objects; service locator makes it harder (but still possible).

    I do agree, however, that at a certain point the classes and unit tests can get so fine-grained that you're not really testing that the components work together as intended when integrated. We found we needed to write higher-level integration or sub-system tests to really make sure that everything was working together to meet the requirements. And in some cases, we felt that those higher-level tests replaced the need for some of the more mindless unit tests.

    I also agree about the interface clutter issue. In my opinion, interfaces are great but overused they can cause too much conceptual clutter. Unfortunately mock objects do work better with interfaces.

    Steve
  24. Mock Objects in the Real World[ Go to top ]

    I've just been working on seemingly exactly the same sort of problem: reading XML files to update a database, and trying to test the business logic using Mock Objects.

    I used EasyMock. Although there is not much activity on that project, it does seem to be pretty complete and useful, at least for my purposes.

    You've probably seen EasyMock, but just in case, here's a short translation of some of your example:

    // EasyMock 1.x - requires: import org.easymock.MockControl;
    MockControl control = MockControl.createStrictControl(Database.class);
    Database mock = (Database) control.getMock();
    // ...

    // set up expectations:
    // to expect a method call:
    mock.insert("Ben Teese");
    // or to expect and return:
    control.expectAndReturn(mock.getNames(), new String[] {"Ben Teese"});

    // ...

    // now run test:
    control.replay();
    new Processor(mock).doStuff(1);

    // verify results:
    control.verify();

    EasyMock uses Java dynamic proxies, so the expectation calls to the mock object look just like the real calls. It's pretty cool. I think there is a CGLIB variant that can mock classes, rather than just interfaces, too, but I haven't tried that.

    The biggest problem we hit with EasyMock occurred when we were trying to use it in "default" mode rather than "strict" mode. In strict mode the order of calls is significant, and exact matches of all parameters are required, in order. In default mode the order of method calls is not significant. EasyMock does something weird in this case to try to associate the (unordered) expected calls with the actual calls. Somewhere along the line it uses toString() on the method parameters, and uses the result in a compareTo() for a sorted set. This caused me problems when I'd use parameters that were equal according to equals(), but gave a different toString() representation. (I include the object ID in toString().)

    However, when I switched to strict mode, which is actually the more correct mode for my application, this problem disappeared.

    John Hurst
    Wellington, New Zealand
  25. 3 Levels of testing[ Go to top ]

    This article pretty much concurs with what I've found when using Mock Objects. I've come to categorise the different tests I write into 3 main areas:

      * Node tests - these invariably use DI and mock objects. This helps drive out the interactions between service layers (imagine writing an HTTP handler that takes HTTP requests and emits TCP/IP packets).

      * Tree tests - these test a node in the tree, but inject real implementations, rather than mock implementations. This is more of a black box test, where I don't make assumptions about the internal workings of a subsystem.

      * Path tests - these test particular business stories, and help confirm that the system brings business value. These paths use real implementations (although there may be some stub implementation for external systems)

    For node tests I tend to use EasyMock because it makes refactoring easier. Mock object libraries which use a custom language rather than direct manipulation of interfaces tend to get confused when I change the name of a method - you only get a breakage when you run the test, not at compile time.