Testability Blog
How to Think About the "new" Operator with Respect to Unit Testing

By Miško Hevery
  • Unit testing, as the name implies, asks you to test a Class (Unit) in isolation.
  • If your code mixes Object Construction with Logic you will never be able to achieve isolation.
  • In order to unit-test you need to separate object graph construction from the application logic into two different classes.
  • The end goal is to have either classes with logic OR classes with "new" operators, never both in one class.


Unit-testing, as the name implies, is testing of a Unit (most likely a Class) in isolation. Suppose you have a class House. In your JUnit test-case you simply instantiate the class House, set it to a particular state, call the method you want to test, and then assert that the class's final state is what you would expect. Simple stuff really...

class House {
  private boolean isLocked;

  public boolean isLocked() {
    return isLocked;
  }

  public void lock() {
    isLocked = true;
  }
}
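Such a test can be sketched as follows. It is shown with a plain main method and explicit checks rather than JUnit, purely so the snippet is self-contained:

```java
// A House of the same shape as above: lock() mutates state,
// isLocked() exposes it for the assertion.
class House {
  private boolean isLocked;

  public boolean isLocked() { return isLocked; }

  public void lock() { isLocked = true; }
}

class HouseTest {
  public static void main(String... args) {
    House house = new House();   // instantiate the unit in isolation
    house.lock();                // call the method under test
    if (!house.isLocked()) {     // assert on the final state
      throw new AssertionError("expected house to be locked");
    }
    System.out.println("ok");
  }
}
```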

If you look at House closely you will realize that this class is a leaf of your application. By leaf I mean that it is at the end of the dependency graph: it does not reference any other classes. As such, all leaves of any code base are easy to test, because the dependency graph ends with the leaf. But testing classes which are not leaves can be a problem, because we may not be able to instantiate the class in isolation.

class House {
  private final Kitchen kitchen = new Kitchen();
  private boolean isLocked;

  public boolean isLocked() {
    return isLocked;
  }

  public void lock() {
    kitchen.lock();
    isLocked = true;
  }
}

In this updated version of House it is not possible to instantiate House without the Kitchen. The reason for this is that the new operator of Kitchen is embedded within the House logic and there is nothing we can do in a test to prevent the Kitchen from getting instantiated. We say that we are mixing the concern of application instantiation with the concern of application logic. In order to achieve true unit testing we need to instantiate a real House with a fake Kitchen so that we can unit-test the House in isolation.

class House {
  private final Kitchen kitchen;
  private boolean isLocked;

  public House(Kitchen kitchen) {
    this.kitchen = kitchen;
  }

  public boolean isLocked() {
    return isLocked;
  }

  public void lock() {
    kitchen.lock();
    isLocked = true;
  }
}

Notice how we have removed the new operator from the application logic. This makes testing easy. To test, we simply new-up a real House and use a mocking framework to create a fake Kitchen. This way we can still test House in isolation even though it is not a leaf of the application graph.
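The post suggests a mocking framework; a hand-rolled fake shows the same idea with no library at all. A minimal sketch, with Kitchen's real behavior elided:

```java
// The real collaborator; its actual locking logic is irrelevant to this test.
class Kitchen {
  public void lock() { /* real locking logic elided */ }
}

// Hand-rolled fake: records the interaction instead of doing real work.
class FakeKitchen extends Kitchen {
  boolean lockCalled;
  @Override public void lock() { lockCalled = true; }
}

class House {
  private final Kitchen kitchen;
  private boolean isLocked;

  public House(Kitchen kitchen) { this.kitchen = kitchen; }

  public boolean isLocked() { return isLocked; }

  public void lock() {
    kitchen.lock();
    isLocked = true;
  }
}

class HouseTest {
  public static void main(String... args) {
    FakeKitchen kitchen = new FakeKitchen();
    House house = new House(kitchen);  // real House, fake Kitchen
    house.lock();
    if (!house.isLocked()) throw new AssertionError("house not locked");
    if (!kitchen.lockCalled) throw new AssertionError("kitchen.lock() not called");
    System.out.println("ok");
  }
}
```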

But where have the new operators gone? Well, we need a factory object which is responsible for instantiating the whole object graph of the application. An example of what such an object may look like is below. Notice how all of the new operators from your application migrate here.

class ApplicationBuilder {
  House build() {
    return new House(
             new Kitchen(new Sink(), new Dishwasher(), new Refrigerator())
           );
  }
}

As a result your main method simply asks the ApplicationBuilder to construct the object graph for your application and then fires off the application by calling a method which does the work.

class Main {
  public static void main(String... args) {
    House house = new ApplicationBuilder().build();
    house.lock();
  }
}

Asking for your dependencies instead of constructing them within the application logic is called "Dependency Injection" and is nothing new in the unit-testing world. But the reason why Dependency Injection is so important is that within unit-tests you want to test a small subset of your application. The requirement is that you can construct that small subset of the application independently of the whole system. If you mix application logic with graph construction (the new operator), unit-testing becomes impossible for anything but the leaf nodes of your application. Without Dependency Injection the only kind of testing you can do is scenario testing, where you instantiate the whole application and then pretend to be the user in some automated way.


Testing UI - part 1

Lately I have been getting a lot of questions on how to test User Interface (UI) code. People often claim that UI testing is very hard or even that it is not possible. I think that with the right kind of design, UI testing is just as easy as testing any other piece of code. Let me show you how I unit-test UI in Adobe FLEX, which uses ActionScript as its programming language.

Let's say we wish to test a common UI component such as a login page.

The important thing is to separate the graphical UI from the control logic and data. This can be achieved with the standard Model View Controller design pattern, where the Model is the data (username/password), the View is the visual components (TextField, Button), and the Controller is what glues the pieces into an interactive UI (what happens when I click the Login button). However, from a testing point of view there is one important rule which cannot be broken! The source code dependencies must be expressed in the following order.

View -> Controller -> Model

In other words the Controller and Model can never know about the View! Neither direct nor transitive dependencies are allowed (i.e. Controller knows about X and X knows about View is just as bad as Controller knows about View). Similarly, the Controller knows about the Model but the Model does not know about the Controller (although that requirement is not as strict). Oftentimes I merge the Model and the Controller into a single class if I don't expect any reuse of the Model, as in this case of a login page.

Let's start with the Controller/Model:

package example.flextesting {
  [Bindable]
  public class LoginPage {
    
    public var username:String;
    public var password:String;
    public var showError:Boolean;
    
    public var authenticator:Function;
    
    public function login():void {
      showError = authenticator(username, password);
    }

  }
}

Notice how closely the Controller mimics the actual UI. Each entry field gets a field, each UI state (showError) gets its own field as well, and finally each action gets a method. Also notice the [Bindable] annotation, which allows any class to listen to modifications of the object's state. In our case we want the View to be able to listen to state changes of the Controller without the Controller explicitly knowing about the View.

Now that we have a Controller let's look at the View:


<mx:VBox xmlns:mx="http://www.adobe.com/2006/mxml"
         xmlns:flextesting="example.flextesting.*">

  <mx:Script>
    <![CDATA[
      [Bindable]
      public var controller:LoginPage; // injected from the outside
    ]]>
  </mx:Script>

  <mx:TextInput text="{controller.username}"
      change="controller.username = event.currentTarget.text"/>

  <mx:TextInput text="{controller.password}"
      change="controller.password = event.currentTarget.text"
      displayAsPassword="true"/>

  <mx:Button label="Login" click="controller.login()"/>

  <mx:Label text="Login Failed" visible="{controller.showError}"/>

</mx:VBox>

(Screenshots: the "Login" form and the "Login with Error" state.)

Notice that the View has direct access to the Controller. Also notice that all of the TextInputs are bound to the corresponding fields on the Controller. (In ActionScript, {controller.username} means that the value is bound at runtime to the destination, so any changes in username/password will be reflected in the text fields.) Because ActionScript data binding is not bidirectional, we also register change events on the TextInputs which copy any changes in the UI back to the Controller. We then bind the "Login" button to the Controller's login() method. Finally we bind the visibility of the error message "Login Failed" to controller.showError.

All this binding achieves that the Controller is fully separated from the View, so from now on we can forget about the View and just worry about testing the Controller. Now, many people will argue that I can still have errors in the wiring/binding process. True, but from my personal experience most errors are in the logic, not in the boring wiring code. The wiring code either is broken and does not compile, or it compiles and chances are it is right. By ignoring the wiring and the graphical portion of the UI it is unlikely that I have left too many bugs in the code. Also, even if I test the View I still don't know if it "looks right", which only a human can judge. So I simply take a very pragmatic approach and draw the line at the View: I get 90% of the benefits with very little cost. It turns out that there are scenario-based frameworks out there which will allow you to write tests with full View code coverage, but those are not unit-tests and hence I will not go into them here.

As you may have guessed, the Controller will mimic the View very closely. This is actually very desirable, as you don't want an "impedance mismatch" when trying to do the wiring. Any "impedance mismatch" will result in marshaling code, which may turn your simple binding problem into a hidden controller, and hence move logic from its true home in the Controller into the bindings/view, which is undesirable.

Let's see how the above helps testing, as this very simple test shows...

package example.flextesting {
  import flexunit.framework.TestCase;

  public class LoginPageTest extends TestCase {

    public function testLogin():void {
      var loginPage:LoginPage = new LoginPage();
      loginPage.username = "user";
      loginPage.password = "pass";

      var log:String;
      loginPage.authenticator = function(u:String, p:String):Boolean {
        log = u + "/" + p;
        return true;
      };

      loginPage.login();

      assertEquals("user/pass", log);
      assertTrue(loginPage.showError);
    }

  }
}

Notice that since the Controller mimics the View, the Controller forms a kind of domain-specific-language (DSL), which is actually very useful for scripting scenario tests and also for understanding what the test is doing.

Finally, let's look at one last thing: how the whole thing is wired up. Your Controller will need to collaborate with your application service objects. This implies that the Controller is dependency-injection (DI) heavy and should therefore be injected into the View. As usual you will need a single top-level factory which instantiates all of the services, Controllers, and Views and then injects all of the references into the appropriate places. Here is the FLEX equivalent of the "main method" (sketched; the view component name is illustrative):

<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                xmlns:flextesting="example.flextesting.*">

  <flextesting:LoginPage id="loginPage"
      authenticator="{authenticator}"/>

  <flextesting:LoginPageView controller="{loginPage}"/>

</mx:Application>

Without going too much into the details of FLEX, a tag element is equivalent to the new operator. So

<flextesting:LoginPage id="loginPage" authenticator="{authenticator}"/>

is the same as

var loginPage:LoginPage = new LoginPage();
loginPage.authenticator = authenticator;

Therefore the example above is the place where all of the components get instantiated and the references get passed to the appropriate objects. (Good old dependency-injection.)

-- Misko Hevery


Unit-Tests way of thinking

A good definition of a unit test is a test which 1) runs fast (<5ms) and 2) when it fails you can determine what is wrong without resorting to a debugger. This implies that a unit-test must execute a very limited amount of code, so that failures are well isolated. The ability to execute any piece of application code in isolation requires a style of programming which is not immediately obvious.

Unit-testing is not the only way to test. The other kind is scenario testing. In scenario testing it is not a requirement that each piece of functionality can be executed in isolation, because in scenario testing we are pretending to be a user. Testability Explorer measures how unit-testable the product is. This implies that one could have a lot of scenario tests and still score high (i.e. poorly) on unit-testing cost.
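The distinction can be made concrete with a small Java sketch (class names are invented for illustration): a unit test executes only one class, faking its collaborator, whereas a scenario test would assemble the whole object graph and drive it as a user would.

```java
// Collaborator boundary: faking this keeps the test isolated and fast.
interface PriceSource {
  int priceOf(String item);
}

class Checkout {
  private final PriceSource prices;

  Checkout(PriceSource prices) { this.prices = prices; }

  int total(String item, int quantity) {
    return prices.priceOf(item) * quantity;
  }
}

class CheckoutUnitTest {
  public static void main(String... args) {
    // Unit test: only Checkout's own code runs, so a failure here
    // points directly at Checkout, no debugger required.
    Checkout checkout = new Checkout(item -> 10);
    if (checkout.total("apple", 3) != 30) {
      throw new AssertionError("expected a total of 30");
    }
    System.out.println("ok");
  }
}
```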

Let's see if we can validate the theory. Here is a list of projects which cater to unit-testing; I would expect that their authors understand the nuances of unit-testing, and as a result these projects should score very low on testability cost. On the other hand, authors of projects which focus on scenario testing would be less likely to use unit-tests in their products. Below is a list of products in the order in which they help unit-testing, and how they score on testability cost.

  • JUnit [cost=40]: the unit-testing framework from the fathers of unit testing.
  • PicoContainer [cost=15] / [cost=15] / Spring [cost=60]: dependency injection frameworks focus directly on unit-testing, hence one would expect that they themselves are very unit-testable.
  • [cost=30]: a continuous build system which is used to run your unit tests on every check-in.
  • [cost=6]: frameworks whose goal is to make web-apps easy to test. A very impressive cost of 6.
  • HtmlUnit [cost=80] / HttpUnit [cost=200]: scenario-based testing frameworks. Notice the higher cost of HttpUnit.

First, notice that with the exception of HttpUnit all projects are below 100 in testability cost. This is a great number, since the overall average for all open source projects is somewhere in the vicinity of 3000. This implies that people who write tools which aid in testing take testing seriously themselves. However, notice the difference between HttpUnit/HtmlUnit and everything else. HttpUnit is much higher than all other testing-related projects. My theory is that since HttpUnit caters to scenario testing, the authors of this tool test it in a scenario-based way. This would make sense, since the tool is meant for scenario-based testing.

-- Miško Hevery


AspectJ is better than AspectWerkz

In theory I know what Aspect-Oriented-Programming (AOP) is. In practice I have never used AOP! (Not AspectJ, AspectWerkz, nor any other AOP framework.) Nevertheless I am going to go out on a limb and make a bold statement: AspectJ is way better than AspectWerkz! I would love to hear any anecdotal evidence which can support or refute my claim. So why do I think so?

My hypothesis is: AspectJ was written with tests and hence is written in a more testable manner. Over time the advantage of tests was that the code base was more malleable, and hence AspectJ ended up with more features and so has won the AOP war.

Why am I picking on AspectJ vs AspectWerkz? Well because they are two projects with identical goals but very different outcomes. As such I think we can compare them and learn from their differences.

AspectJ is more testable than AspectWerkz. How hard would it be to write a unit-test for any one of the classes in AspectJ or AspectWerkz? Look at the graph below and you will see that AspectWerkz has all of its dots above the score of 2000 in testability cost, while in AspectJ's case all dots are below 2000 (and to the right). The simplest way to think about the scores is that to write a unit-test for AspectWerkz I would encounter an average of 5000 IFs per unit-test, while I would encounter about 1800 IFs per unit-test in AspectJ. The AspectJ test would be a lot more focused (closer to a true unit-test) than the AspectWerkz test would be. It also means that a failure in AspectJ will be easier to diagnose, since the tests are a lot smaller.

Is it not true that any large application becomes hard to test? Well, look at the final size of the JAR or the number of classes in the JAR. As you can see AspectJ has a lot more of both, hence I would assume that it has a lot more features as well. If all large applications were hard to test we would expect a linear relationship between size and testability cost, something which we do not see in the graph below.

How do I know my hypothesis is right? I don't, since I have never used either product, hence I don't know how hard it would be to use or how many bugs I would encounter. But a simple search on Google reveals that AspectJ is a lot more popular.

-- Miško Hevery


Welcome to the Surface

Last year I noticed that I could refactor a piece of code and make it better without actually knowing what the code does. This made me realize that the goodness of the code is independent of its function. I also realized that when I was looking at code to refactor I went through a mental check-list of known red-flags and specific refactorings as an antidote to those red-flags.

So I had an idea to write a bytecode analysis tool which would identify the red-flags for me. The motivation was that it is hard to teach people how to write quality tests if the code base is not testable. One only learns how to write quality tests after one has written lots of tests, and it is hard to write a lot of tests if your code is hard to test. So I had an idea:

"Perhaps I can't make developers to write tests, but perhaps I could prevent developer from getting into untestable mess by avoiding the red flags. This way even if they don't write tests today, they could write tests tomorrow."

The idea was to build the Testability Explorer and then to integrate it into a project's continuous build, so that whenever a developer tried to check in some hard-to-test code the Testability Explorer would flag the offending pieces of the code and fail the build. However, I wanted to make sure that the Testability Explorer would not only fail the build but also give useful suggestions as to which lines of code are a problem and how to refactor them. Without this feedback the developers would only get frustrated; with the feedback the developers would learn why a particular way of coding is a problem from a testing point of view. This way the code-base would always remain in an easy-to-test state. And even if developers were not writing tests today, nothing would prevent them from writing the tests tomorrow.

Once we had built the Testability Explorer we realized that we had a chance to influence not just our own developers but the whole open source community, and the idea of the http://TestabilityExplorer.org web site was born. Here we publish testability reports for most major Java open-source projects. We hope that this tool will be a flashlight onto which open-source projects are doing a great job and should get recognized for their hard work, and also that we can offer suggestions on how to refactor those projects which are less than ideal from a unit-testing point of view.

-- Miško Hevery
