Browsing all articles from August, 2009

Gaps I found while dog-fooding Typemock Isolator

Author Eli Lopian    Category Product, Unit Tests

I have been dog-fooding Typemock Isolator with the Metric Dashboard, and there are quite a few gaps in the product that I found while using it.

Before I go into the details of the gaps, I must point out that all of these features exist in the older APIs (which are still available), but they are missing from the newer lambda-based APIs (called AAA), which are otherwise much better.

Firing Events

This is not an isolation feature per se, but it is needed to simulate how the unit under test reacts to external events. In my case I needed to test the TestDriven.NET integration: the TestDriven add-in can raise events whenever a test suite is run and whenever a test completes (thanks, Jamie).
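In the meantime, events can be raised through a hand-rolled stub. A minimal sketch, assuming a small event interface for the add-in integration (the interface and names here are illustrative, not the real TestDriven.NET API):

```csharp
using System;

// Illustrative interface -- the real TestDriven.NET add-in API differs.
public interface ITestRunEvents
{
    event EventHandler TestSuiteRun;
    event EventHandler TestCompleted;
}

// Hand-rolled stub that lets the test fire the events on demand.
public class StubTestRunEvents : ITestRunEvents
{
    public event EventHandler TestSuiteRun;
    public event EventHandler TestCompleted;

    public void RaiseTestSuiteRun()
    {
        var handler = TestSuiteRun;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    public void RaiseTestCompleted()
    {
        var handler = TestCompleted;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}
```

The unit under test subscribes to the stub, and the test calls RaiseTestSuiteRun() to simulate the add-in firing the event.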

Verifying the arguments of a constructor

One of Typemock Isolator's unique features is the ability to fake objects that are created within the production code; we call these future instances. There was one case where I wanted to test that a constructor was called with a specific argument. The instance was a class that watched a specific directory for new files and loaded each file. I tested that class in another test; now I just wanted to test that the class was watching the correct directory, i.e. that the following was called:

var watcher = new FileWatcher(testOutputDirectory);

I could of course change my code to have an empty constructor and set the output directory afterwards:

var watcher = new FileWatcher();

But this felt unnatural: there is no reason for a FileWatcher to be created without a directory. It would lead to more logic (testing that the user called WatchFolder) and complicate the application.
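For reference, this is roughly the test I wanted to write and could not. Swapping in the fake uses the real AAA API of the time; the Verify line is a hypothetical overload, written in the style of the existing verification calls:

```csharp
// Swap in a fake for the next FileWatcher the production code creates
// (real AAA API).
var fake = Isolate.Fake.Instance<FileWatcher>();
Isolate.Swap.NextInstance<FileWatcher>().With(fake);

// ... run the production code that creates the FileWatcher ...

// Hypothetical: verify the future instance was constructed with the
// expected directory. No such overload exists -- that is the gap.
Isolate.Verify.WasCalledWithExactArguments(() => new FileWatcher(testOutputDirectory));
```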

Custom Argument Verification

Although we do have an API to verify that a call with exact arguments was made, I needed to test that a specific argument was passed, but I couldn't use the exact-arguments API. The reason is that the argument was a class with two properties, but its Equals method was overridden to compare only one property, and I had to check both. I needed a custom checker. Ohad told me that this exists, but is undocumented, in the NonPublic API (the API for private members).
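To make the problem concrete, here is its shape with a made-up class (the real class was different): exact-argument verification compares arguments with Equals, so a one-property Equals hides differences in the second property.

```csharp
// Made-up example class: Equals compares only Name and ignores Count.
public class Record
{
    public string Name { get; set; }
    public int Count { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as Record;
        return other != null && other.Name == Name;   // Count is ignored!
    }

    public override int GetHashCode()
    {
        return Name == null ? 0 : Name.GetHashCode();
    }
}
```

Exact-argument verification would accept a call even when Count is wrong, because two records with the same Name compare equal; hence the need for a custom checker that inspects both properties.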


Sequencing

We have put a lot of thought into the sequencing logic of the APIs, to make tests easy to read and write on one hand, yet robust on the other (robust meaning the test keeps passing when the production implementation changes, as long as the logic does not). In this test I needed to call an API to set up the data, but the date needed to be yesterday. Then, with the date back to normal, I needed to call the code and verify that a new record was added.

var today = DateTime.Now;
var yesterday = today.AddDays(-1);

// pretend we started yesterday
Isolate.WhenCalled(() => DateTime.Now).WillReturn(yesterday);
var underTest = new DataModel();

// back to today
Isolate.WhenCalled(() => DateTime.Now).WillReturn(today);
// This should add a new record for today.

This didn’t work and I had to put both the Isolate lines together:

var today = DateTime.Now;
var yesterday = today.AddDays(-1);

// pretend we started yesterday
Isolate.WhenCalled(() => DateTime.Now).WillReturn(yesterday);
// back to today
Isolate.WhenCalled(() => DateTime.Now).WillReturn(today);

var underTest = new DataModel();
// This should add a new record for today.

This is bad: the two WhenCalled statements are sequenced, so changing the implementation to call DateTime.Now twice in the constructor will make the second call return today and fail the test.

Asserting the number of times a method was called

While testing the save logic, I needed to test for a race condition and make sure that the save is called only once. The save logic is called from the calculation logic, which runs periodically (say, once a second). Once the auto-save interval is reached, the save is performed on another thread (to keep the application responsive):

var saveInterval = OptionsSettings.Settings.AutoSaveEveryMinutes;
if (stopWatchForSave.Elapsed.TotalMinutes >= saveInterval)
{
  ThreadPool.QueueUserWorkItem(new WaitCallback(t =>
  {
    Save();
    stopWatchForSave = Stopwatch.StartNew();
  }));
}

But if Save takes too long and this method is called again before the stopwatch is restarted, another save will be queued.

To test this I needed to make sure that the Save method is called exactly once, but this API is missing.

Bonus points go to those who know how to solve this.
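One possible workaround (a sketch, not an official call-counting API): intercept Save with DoInstead and count the calls yourself. The object and counter names here are illustrative.

```csharp
int saveCalls = 0;

// Replace Save() with a counting stub; Interlocked keeps the count
// correct, since Save runs on the thread pool.
Isolate.WhenCalled(() => underTest.Save()).DoInstead(context =>
{
    Interlocked.Increment(ref saveCalls);
    return null;
});

// ... drive the calculation past the auto-save interval, twice ...

Assert.AreEqual(1, saveCalls);   // Save must have been queued exactly once
```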


Unit Testing the Metric Dashboard – Part 4

Author Eli Lopian    Category Unit Tests

Continuation of unit testing the Metric Dashboard.

Debugging via Unit Tests

Sometimes it is necessary to debug our code from our unit tests. Having the unit tests is great, as they set up the scenario for us. Here is what I saw when debugging a test:


Then I remembered: the colored line around the method is a flag that the method is currently faked. That is great; I know exactly what is happening and can fix the bug quickly.


Unit Testing the Metric Dashboard – Part 3

Author Eli Lopian    Category Unit Tests

Continuation of unit testing the Metric Dashboard.

Watching for new files

The Dashboard listens to the test-result folder for new files. As this is done on another thread, the test must wait for the file to be read and processed before it can verify that the feature works. In older versions of Typemock there was a VerifyWithTimeout API. Here is a hack that achieves the same with the lambda APIs. This is actually an interesting test:


  1. I delete the file before copying just to be sure.
  2. Here I am faking the current date to be the same as the date of the test – this is so that I get the correct value from CurrentProtection
  3. Hack to Wait for Load() to be called. In this hack I set an event once the method has been called and then I call the real Load() method.
  4. Copy file to watched location
  5. Wait for Load to be called
  6. Here a short wait is needed for the method to complete.
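Steps 3 to 5 can be sketched in code as follows. The reader object and file names are illustrative, and I am assuming the DoInstead context exposes WillCallOriginal() to fall through to the real method:

```csharp
var loadCalled = new ManualResetEvent(false);

// Step 3: signal the test when Load() is reached, then run the real Load().
Isolate.WhenCalled(() => reader.Load()).DoInstead(context =>
{
    loadCalled.Set();
    context.WillCallOriginal();   // assumed API to defer to the real method
    return null;
});

// Step 4: copy the trx file into the watched folder.
File.Copy(sourceTrxFile, Path.Combine(watchedFolder, "results.trx"));

// Step 5: wait (with a timeout) for Load() to be called.
Assert.IsTrue(loadCalled.WaitOne(5000), "Load() was never called");

// Step 6: a short wait so Load() can finish processing.
Thread.Sleep(100);
```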

Unit Testing the Metric Dashboard – Part 2

Author Eli Lopian    Category Unit Tests

This is a continuation of the unit testing the Metric Dashboard series; you can find part 1 here.

Passing a fake object to constructor

At one point I had to pass a fake object into the constructor. Here I used a feature that will be available in our next version. This feature, currently called intelliTest, makes it much easier to create fake objects without needing to go back and forth in the code.

Stage one: when the caret is in the right position, the intelliTest window appears


Clicking it opens a new window listing the types that can be faked


Choosing the Fake will:

  • Create the fake in the test code
  • Insert the local variable in the method
  • Add the [Isolated] attribute to the test
  • Add the correct using statements
  • Add the correct references to the project


It is really great fun using this feature, as it takes away all the plumbing work and sets everything up for me.
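To give an idea of the result, here is roughly what a test looks like after those steps (the class and interface names are mine, not from the product):

```csharp
using NUnit.Framework;
using TypeMock.ArrangeActAssert;   // using statements added automatically

[TestFixture]
[Isolated]                          // attribute added to the test
public class DashboardTests
{
    [Test]
    public void Dashboard_AcceptsFakeReader()
    {
        // fake created in the test code
        var fakeReader = Isolate.Fake.Instance<ITestResultReader>();

        // local variable inserted into the method
        var dashboard = new Dashboard(fakeReader);

        Assert.IsNotNull(dashboard);
    }
}
```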


Unit Testing the Metric Dashboard – Part I

Author Eli Lopian    Category Unit Tests

I have talked about unit testing the Metric Dashboard, today I had some time to dive into the task.

I decided to test the Bugs Caught feature of the Visual Studio add-in, and I caught two bugs: the first had to do with the order in which the trx files are read; the second had to do with saving the unfixed unit tests between sessions, so that we don't re-count failed tests that are run multiple times.

Using Files in Unit Tests

I decided to use real trx files in the unit tests (I found several different ways to do this; using real files seemed the easiest). This is what the file structure looks like:


To read the files in the tests, I had to either deploy the files or do the following.
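For the "deploy the files" route, MSTest has the DeploymentItem attribute; the alternative is to resolve the path relative to the test assembly. A sketch of both (the folder and file names are illustrative, not the original code):

```csharp
using System.IO;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class TrxReadingTests
{
    // Option 1: ask MSTest to copy the file next to the test binaries.
    [TestMethod]
    [DeploymentItem(@"TestData\results.trx")]
    public void ReadsDeployedFile()
    {
        Assert.IsTrue(File.Exists("results.trx"));
    }

    // Option 2: resolve the path relative to the test assembly
    // instead of deploying.
    private static string GetTestDataPath(string fileName)
    {
        var dir = Path.GetDirectoryName(
            Assembly.GetExecutingAssembly().Location);
        return Path.Combine(dir, Path.Combine("TestData", fileName));
    }
}
```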



Unit Testing the Metric Dashboard

Author Eli Lopian    Category Unit Tests

While developing the Metric Dashboard, I purposely did not write any unit tests. I wanted to feel and remember what it is like to develop without unit tests, and to see how to write the unit tests after the deed :-)

Manually Unit Testing

The first issue I had while developing is that I found myself running the code (either with the debugger or without) much more than usual; let's call this Manual Unit Testing. There were times when I knew I was wasting my time, manually rerunning the same scenarios over and over again. I felt the urge to write a unit test, as I knew I was wasting my time. But the feeling was always: I am nearly there, I'm just about to fix the bug, so why waste time writing a unit test? Most times I wasn't there, and it took many runs and setups just to reach the correct scenario.

If I had had the Metric Dashboard running, I would have seen this:


That is 32% of my time in the debugger and running the application, and 68% of my time authoring production code.

Seeing this is a great indicator that I can do better, because after all this work I still have ZERO tests to cover me.

Application Architecture

I am going to start writing unit tests, but beforehand let's give a short architecture overview.


There are three components:

Typemock Server

This component is responsible for storing the team's metrics and publishing them to the dashboard.

  • The Server is a Windows Service
  • It broadcasts itself so that clients can discover servers automatically, freeing users from manually typing in the server's address
  • It merges and stores the data
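The broadcast/discovery handshake might look like this in miniature (an assumption on my part; the real protocol is not described here):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;

// Server side: periodically announce the service address over UDP broadcast.
var announcer = new UdpClient();
announcer.EnableBroadcast = true;
byte[] message = Encoding.UTF8.GetBytes("TypemockServer:192.168.1.10:8080");
announcer.Send(message, message.Length,
               new IPEndPoint(IPAddress.Broadcast, 9999));

// Client side: listen on the same port and pick up the announcement.
var listener = new UdpClient(9999);
var remote = new IPEndPoint(IPAddress.Any, 0);
byte[] data = listener.Receive(ref remote);   // blocks until a packet arrives
string serverAddress = Encoding.UTF8.GetString(data);
```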

Managers Dashboard

This is the reporting component. It allows managers and team leaders to see how unit testing has helped the team, both by catching bugs early on and by reducing the time spent on manual unit testing. Using this dashboard we can see how much time is being put into writing unit tests and how useful it really is.

  • Auto-Finder discovers the Typemock Server and automatically connects to it.
  • Data filtering is done in the Dashboard via different criteria
  • Export of the data to Excel
  • Charts and pies

Visual Studio Add-in

This component tracks the unit-testing effort per solution and saves it locally and on the Typemock Server. Using this, each developer can see how much time is being invested in unit tests and the immediate benefits.

  • Auto-Finder, discovers the Typemock Server and then automatically connects to it.
  • Current Action discovers what the developer is doing: debugging, running the application, writing unit tests, writing production code, idle, or in another application.
  • Bugs Caught reads the test-run data and calculates the number of bugs found, i.e. the number of failing tests. It counts each failed test once until it is fixed.
  • Charts and Pies.
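The "count each failed test once until it is fixed" rule above can be sketched as follows (my reading of the description; the class and method names are illustrative):

```csharp
using System.Collections.Generic;

public class BugCounter
{
    private readonly HashSet<string> unfixed = new HashSet<string>();

    public int BugsCaught { get; private set; }

    // Called for every test result read from a trx file.
    public void Record(string testName, bool passed)
    {
        if (!passed)
        {
            // A failure counts as a new bug only the first time we see it.
            if (unfixed.Add(testName))
                BugsCaught++;
        }
        else
        {
            // The test passed, so the bug is fixed; a later failure
            // counts as a new bug again.
            unfixed.Remove(testName);
        }
    }
}
```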

What do the colors mean?
The colors are:

  • Yellow: Communication aspects
  • Green: Data aspects
  • Charts: UI aspects
  • White: others

Where do I start?

So now that I want to get going, my biggest question is where do I start?


Bug Fix Time not a good metric

After a few weeks of using the BugFixTime metric, I found it too hard to understand; it leaves developers and managers clueless about what they have to do to improve it.

We have done some internal thinking, gathered some feedback, and created the next generation of this tool, built to help teams develop with integrity.


With this tool we can see the percentage of time we spend writing unit tests, the percentage we spend debugging our application, and what is left writing production code.

The theory is that we debug our code when there is a bug. When this is done without a unit test, we are manually testing, and the time we spend doing so is longer (we have to set up the environment) than doing it via a unit test; it is also not as cost-effective as writing a test, a unit that can be run automatically.

We found that when developing with unit tests, the percentage of time spent debugging drops drastically, and that time is spent writing the unit tests instead. But we get more bang for our buck: we get a safety net of automatically tested code.

In other words, we end up spending about the same amount of time writing production code, but we get better quality, so we need to spend less time in the integration/system-testing phase.


We are able to see the metrics over time and see how much we are improving.

The metrics are sent to a Typemock Server, which lets us see the totals for our teams: how much time we are spending unit testing, how much time we are saving on debugging, and how much the tests are protecting us.

Here is what the Team view looks like


Currently the tool works with MSTest and has been tested with Visual Studio 2008.