jimmy keen

on .NET, C# and unit testing

Getting started with JavaScript unit testing – Node, Jasmine, Karma and TDD on Windows

March 12, 2015 | tags: unit-testing javascript karma jasmine node.js windows


I’m working on a .NET regex tutorial and I thought it would be nice to have sort of interactive, try-it-yourself examples embedded within the blog post. This sounds like a job for JavaScript, right? Simple enough. The only issue is, my JavaScript knowledge and experience are virtually non-existent. What do I do? I’ll start with a test!

JavaScript and unit testing

What are the unit testing options where JavaScript is concerned? To start we need two things - a test runner and an assertion library. This StackOverflow question provides a decent overview of what’s available. It turns out all we need is Jasmine, which is both a test runner and a BDD framework supporting the BDD style of writing tests (or rather specs).

Installing Jasmine on Windows

  1. Download and install node.js (it comes as standard Windows .msi installer )
  2. Once it’s done, type the following in the command line to see whether node’s package manager (npm) was successfully installed (we’ll use npm to download further modules):

> npm --version


Now we only need a few more modules: Yeoman, Bower and Generator-Jasmine. Type the following in the console:

> npm install -g yo

> npm install -g bower

> npm install -g generator-jasmine

The -g switch tells npm to install packages in node’s global modules directory (rather than locally within your project’s directories).

To finalize the testing environment setup, we need to scaffold Jasmine’s tests directory. To do that, we’ll navigate to the project directory and use Yeoman’s yo tool:

> yo jasmine

This will create a test directory with index.html and spec/test.js files, which are of primary interest to us.

Running first test

The index.html is Jasmine’s test runner – open it in a browser and your tests will run. “How? What tests?” you might ask. Let’s take a quick look at index.html:

<!-- include source files here... -->

<!-- include spec files here... -->
<script src="spec/test.js"></script>

We simply need to reference our implementation and test files:

<!-- include source files here... -->
<script src="../regex-highlighter.js"></script>

<!-- include spec files here... -->
<script src="spec/regex-highlighter-tests.js"></script>

What’s next? The first test, obviously. Since this is a super-fresh environment, our first test for the highlightMatches function is going to be trivial, requiring the implementation to only return a value:

'use strict';

(function () {
    describe('highlightMatches', function () {
        it('should return "Success"', function () {
            expect(highlightMatches('x', 'y')).toBe('Success');
        });
    });
})();

An explanation of Jasmine’s methods and BDD style can be found on the Jasmine Introduction page. Without further ado, we add an equally simple implementation of the highlightMatches function, refresh index.html and Jasmine is happy to announce that our first JavaScript test is a very successful one:
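The implementation can be as trivial as the spec demands; a placeholder like the one below (in regex-highlighter.js) is enough to make the test pass, with the actual highlighting logic to follow later:

```javascript
// regex-highlighter.js – just enough implementation to satisfy the spec;
// the real matching/highlighting logic will replace this body later.
function highlightMatches(pattern, input) {
    return 'Success';
}
```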

First successful JavaScript test with Jasmine

Introducing Karma

Our current setup is up and working and we might just as well be done here. But there is one more thing that will help us greatly when developing JavaScript code – Karma. It is a test runner which watches over our files and runs all tests whenever we make any changes in source files. A perfect match for a TDD/BDD environment! You can view the introductory video at Youtube (14:51) (don’t get confused – the tutorial talks about Testacular, which was Karma’s original name a while ago).

To get it we need to execute the following (the karma-cli is Karma’s command line interface module):

> npm install -g karma

> npm install -g karma-cli

Next, navigate to the project directory and initialize the configuration. Karma will “ask” a few simple questions and, based on your answers, generate a config file (jk.config.js):

> karma init jk.config.js

Which testing framework do you want to use ?

Press tab to list possible options. Enter to move to the next question.

> jasmine


What is the location of your source and test files ?

You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".

Enter empty string to move to the next question.

> ../regex-highlighter.js

> spec/regex-highlighter-tests.js


The configuration is ready. All that’s left to do is run Karma, passing the configuration file name as an argument:

> karma start jk.config.js

Everything should be fine and we’ll be greeted with a message similar to the one below:

Successful Karma setup

Modifying test to make it fail will get noticed immediately:

Failing test


To get started with JavaScript unit testing you need to:

  1. Install node.js
  2. (optional) Install Jasmine, Yeoman and Bower: npm install -g yo bower generator-jasmine (this trio isn’t needed when you use Karma – Karma will take care of dependencies on its own)
  3. (optional) Scaffold the Jasmine test directory: yo jasmine
  4. (optional) Run the first test by opening Jasmine’s index.html
  5. Install Karma: npm install -g karma karma-cli
  6. Configure Karma: karma init <config>
  7. Start Karma: karma start <config>

Logging test results with NUnit

February 28, 2015 | tags: unit-testing nunit extensions logging design

Recently, a question popped up on StackOverflow asking what needs to be done in order to custom-log unit test failures. Not many people know it, but NUnit offers an extensions API which can be utilized to solve this very problem. In this post, we’ll see how.

NUnit Addins API

To extend NUnit we need to implement an addin listening to events NUnit triggers during different stages of test execution. Our response to such events (preferably the test finished event) will be logging some data to a file. As simple as that. Let’s see what we’ve got:

  • IAddin interface & NUnitAddinAttribute – these two will be used to “introduce” our addin to NUnit and make sure it is loaded and present during test execution
  • EventListener – this interface (yes, an interface) will be our primary implementation doing the actual logging when some test-related event occurs

All the components we need come in the NUnit.AddinsDependencies package, available on NuGet.


1. Detection

In order for NUnit to detect our addin we need to mark class implementing it with NUnitAddinAttribute and implement IAddin interface:

[NUnitAddin(
  Name = "File Logger",
  Description = "Writes test result to file",
  Type = ExtensionType.Core)]
public partial class NUnitFileLoggerAddin : IAddin

We’ll also kick off the unit tests project with the very first test, verifying whether our addin is discoverable. With FluentAssertions, it is as easy as:

[Test]
public void NUnitFileLoggingAddin_IsDiscoverable()
{
    var addin = new NUnitFileLoggingAddin();

    addin.GetType().Should().BeDecoratedWith<NUnitAddinAttribute>(
        a => a.Type == ExtensionType.Core);
}

2. Installation

Next, the addin must hook itself to NUnit’s extensions system via IAddin.Install method:

public bool Install(IExtensionHost host)
{
    var listeners = host.GetExtensionPoint("EventListeners");
    if (listeners == null)
        return false;

    listeners.Install(this);
    return true;
}

This is to make sure we receive notifications when a test-related event occurs.

3. EventListener interface

This interface offers notifications for various stages of test suite execution. The one that we want to hook into is the TestFinished method. We’ll simply log the time, result and test name. If a test fails, we also save an error message:

public void TestFinished(TestResult result)
{
    using (var file = File.Open("Log.txt", FileMode.Append))
    using (var writer = new StreamWriter(file))
    {
        var message = string.Format("[{0:s}] [{1}] {2}", DateTime.Now,
            result.ResultState, result.Name);
        var isFailure =
            result.ResultState == ResultState.Error ||
            result.ResultState == ResultState.Failure;
        if (isFailure)
            message += " " + result.Message;

        writer.WriteLine(message);
    }
}

That’s all you need to log test results to a custom file. Simply copy the NUnitFileLoggingAddin class files to your test project and your tests will be logged to the Log.txt file. However, we are far from done.

Design considerations

In its current form our addin is a rather poor piece of software. We lack proper unit tests (File.Open and DateTime.Now sort of get in the way) and even changing the log file name would require recompilation. This is no good.

Before we jump straight to refactoring let’s take a moment to think about possible improvements and extension points of our addin.

1. Code quality improvements

  • We should have unit tests for the logging part. This requires abstracting file access and time.
  • NUnit will not allow us to inject abstracted dependencies via constructor arguments (addin instances are created via reflection). We need to find a way around it.
  • NUnit will not allow the addin to live in a separate assembly (it must be in the same one as our tests)1. We want to have the majority of features in a base class, so that all that is required is creating a derived type in our test assembly.
  • (optional) Opening a file for writing with each test execution is not a very efficient thing to do. We’d be better off storing results and writing them all at once.
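That last point could be addressed with the EventListener interface itself, whose RunFinished notification fires once per test run. A sketch (FormatMessage is a hypothetical helper standing in for the formatting logic shown earlier):

```csharp
// Buffer log messages in memory and flush them once per run
// (a sketch; FormatMessage is a hypothetical formatting helper).
private readonly List<string> pendingMessages = new List<string>();

public void TestFinished(TestResult result)
{
    pendingMessages.Add(FormatMessage(result));
}

public void RunFinished(TestResult result)
{
    File.AppendAllLines("Log.txt", pendingMessages);
    pendingMessages.Clear();
}
```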

2. Extensibility

  • It would be good if we could change log file name/location.
  • …or log message format.
  • (optional) Instead of a file, maybe we could write test results to a database or a web service.

Refactoring for testability and extensibility

Our first step will be to introduce abstractions over the file system and time: IFileStreamFactory and ITimeProvider, respectively. Now, we also need to solve the problem of providing those abstractions. Since NUnit will create the addin instance using reflection, there must be a working parameterless constructor for our addin. Yet we also need a constructor with parameters to pass mocked dependencies in unit tests. What do we do? We use an anti-pattern – poor man’s DI:

public NUnitFileLoggingAddin()
    : this(new FileStreamFactory(), new TimeProvider())
{
}

public NUnitFileLoggingAddin(
    IFileStreamFactory fileStreamFactory,
    ITimeProvider timeProvider)
{
    this.fileStreamFactory = fileStreamFactory;
    this.timeProvider = timeProvider;
}

We’re good to write a few tests for the logging part. As you might know from my previous posts, unit testing, IDisposable and Stream don’t play along very well. To test I/O interactions we will be using the StreamRecorder class:

[Test]
public void TestFinished_LogsSuccessfulTestNameAndTimestampToFile()
{
    // CreateTestResult is a small test helper; addin is created in the
    // test setup with the faked dependencies (see the GitHub repository)
    var testResult = CreateTestResult("DummyTestName", ResultState.Success);
    var streamRecorder = new StreamRecorder();
    A.CallTo(() => fileStreamFactory.Create(A<string>._, A<FileMode>._))
        .Returns(streamRecorder);
    A.CallTo(() => timeProvider.Now())
        .Returns(10.May(2015).At(17, 35, 20));

    addin.TestFinished(testResult);

    streamRecorder.Output.Should()
        .StartWith("[2015-05-10T17:35:20] [Success] DummyTestName");
}

The test above simply verifies whether the correct message is written to the log file. We should add a couple more tests for the logging functionality before we proceed to the extensibility refactoring. All unit tests written for NUnitFileLoggerAddin can be viewed at my GitHub repository.

Extension points

At this point our addin is fully usable. We might even use it to record its own tests - all we need is a local type inheriting from our base NUnitFileLoggerAddin class:

[NUnitAddin] public class LoggerAddin : NUnitFileLoggerAddin { }

This is a minor nuisance given that we want our addin to be reusable, but luckily the majority of the features can remain in the base class.

Back to extension points. As I mentioned, we want to have control over the output formatting and log file path. To achieve this, our base NUnitFileLoggerAddin will expose several protected virtual members:

protected virtual string LogFilePath { get { return "Log.txt"; } }
protected virtual string CreatePassedTestMessage(TestResult result,
    DateTime currentTime)
protected virtual string CreateFailedTestMessage(TestResult result,
    DateTime currentTime)

Now, our LoggerAddin can for example change the way failed tests are reported:

public class LoggerAddin : NUnitFileLoggerAddin
{
    protected override string CreateFailedTestMessage(TestResult result,
        DateTime currentTime)
    {
        return string.Format("{0} failed. Investigate!", result.Name);
    }
}


Although we only did simple logging, the available API offers much more in terms of extensibility. For example, a similar mechanism can be used to write a database integration testing API, where NUnit will gather all tests marked with a special database attribute and, before any of them is run, execute some code, for example creating a database and inserting test data. We’ll explore these options in the next blog post.

It is also worth noting that the upcoming NUnit 3.0 will change the way addins are implemented. Read about it on the NUnit Addins wiki page and Addins replacement in (NUnit) Framework.

How to mock private method – solutions

January 25, 2015 | tags: unit-testing mocking c# design

In one of my previous blog entries I explained why it is not possible to mock private methods with tools like Moq or FakeItEasy. In this post I’ll show how you can overcome such a situation1.

Problem: Origins

First, it is important to identify the source of the problem. When you say

I need to mock this private method

All I can hear is

This class is poorly designed. I must use some workarounds or even ugly hacks in order to test it. I know I shouldn’t be going down that road, but it seems I have no choice.

Properly designed SOLID code will never put you in a position which requires you to mock a private method. The source of the problem? Bad design.

To better understand my point, let’s take a quick look at a class building an order XML document. Suppose you want to test the CreateDocument method:

public class OrderDocumentFactory
{
    public XDocument CreateDocument(Order order)
    {
        var document = new XDocument();
        document.Add(new XElement("Order",
            new XElement("Date", order.Date.ToShortDateString()),
            new XElement("OriginalValue", order.TotalValue),
            new XElement("Discount", CalculateDiscount(order))));

        return document;
    }

    private double CalculateDiscount(Order order)
    {
        var promotionsFileName = string.Format("P_{0:ddMMyy}.TXT",
            order.Date);
        var promotionsFileContent = File.ReadAllText(promotionsFileName);
        var baseDiscount = double.Parse(promotionsFileContent);
        var isLateProcessing = (DateTime.Now - order.Date).TotalDays > 3;
        var bonusDiscount = isLateProcessing ? 10.0 : 0.0;
        var totalDiscount = baseDiscount + bonusDiscount;

        return order.TotalValue * (100 - totalDiscount) / 100.0;
    }
}

CreateDocument is trivial to test if we can mock this discount calculation thing. With all the file-loading, date-parsing and what-not, it simply gets in our way. This is a lot of stuff we do not really care about; we should mock it. But we cannot. What do we do then?

Design considerations

The single responsibility principle, first of the SOLID principles, states that:

Every class should have a single responsibility (a reason to change - JK), and that responsibility should be entirely encapsulated by the class

In other words, a class should do one thing and one thing only. OrderDocumentFactory, what does it do? It builds XML. If our requirements evolved so that we needed to return slightly different XML, that would be our one reason to change the class in question. Building XML is our single responsibility.

Unfortunately, OrderDocumentFactory also has a second, hidden responsibility – it calculates the discount. Had we decided the bonus discount was no longer supported, we would have to modify the OrderDocumentFactory class. This is strange, because why would a document factory have anything to do with discount calculation? If we wanted to change the discount calculation logic, we should rather be looking to modify a DiscountCalculator class. And there goes our problem – we have one class pretending to be two2.

SOLID solution

We already know what to do. We need a second class that will handle the discount calculation logic. All right then, is this code unit testable now?

public XDocument CreateDocument(Order order)
{
    var document = new XDocument();
    var discount = new DiscountCalculator().CalculateDiscount(order);
    document.Add(new XElement("Order",
        new XElement("Date", order.Date.ToShortDateString()),
        new XElement("OriginalValue", order.TotalValue),
        new XElement("Discount", discount)));

    return document;
}

Not so fast. This “solution” adds another responsibility to the CreateDocument method – instance creation. We do not want that. What if DiscountCalculator required some constructor arguments? Where would CreateDocument get them from? Instance creation is almost always out of scope of any class’s responsibilities. We need dependency injection:

public class OrderDocumentFactory
{
    public OrderDocumentFactory(IDiscountCalculator discountCalculator)
    {
        this.discountCalculator = discountCalculator;
    }

    public XDocument CreateDocument(Order order)
    {
        var document = new XDocument();
        var discount = discountCalculator.CalculateDiscount(order);
        document.Add(new XElement("Order",
            new XElement("Date", order.Date.ToShortDateString()),
            new XElement("OriginalValue", order.TotalValue),
            new XElement("Discount", discount)));

        return document;
    }

    private readonly IDiscountCalculator discountCalculator;
}

With code structured like this, you don’t have to mock a private method; instead you mock a dependency, IDiscountCalculator, which is the usual course of action.
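As a sketch, the test can now stub the calculator and assert on the produced XML. This assumes FakeItEasy and NUnit, and the Order property names (Date, TotalValue) are taken from the factory code above:

```csharp
// The interface the factory depends on:
public interface IDiscountCalculator
{
    double CalculateDiscount(Order order);
}

[Test]
public void CreateDocument_WritesCalculatedDiscountToXml()
{
    // Stub the dependency instead of fighting a private method
    var calculator = A.Fake<IDiscountCalculator>();
    var order = new Order { Date = DateTime.Today, TotalValue = 100.0 };
    A.CallTo(() => calculator.CalculateDiscount(order)).Returns(75.0);
    var factory = new OrderDocumentFactory(calculator);

    var document = factory.CreateDocument(order);

    Assert.AreEqual("75", document.Root.Element("Discount").Value);
}
```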


In most cases the need to mock a private method originates from poor design. Design that is already causing problems (the inability to write unit tests) and will cause more problems as the code grows. The solution is simple:

  • Fix your design by sticking to the SOLID principles
  • A class should do one thing and one thing only; the fact that you need to mock a private method of your class usually indicates there is a second class with a different responsibility hiding there
  • Any additional responsibilities should be extracted as separate components and provided to the class in question via dependency injection

Unit tests are much like a good buddy. When dealing with your code they will be the first to ask “Wait a second, this cannot be right?”. Mocking a private method cannot be right. And it is not.

  1. We have to make important assumption here - you can change problematic code.

  2. The private CalculateDiscount method is a class in disguise.