jimmy keen

on .NET, C# and unit testing

Refactoring unit tests for readability

June 07, 2015 | tags: c# unit-testing refactoring

Any fool can write code that a computer can understand. Good programmers write code that humans can understand. - Martin Fowler, 2008

The code you produce will be read. The “brilliant” class you have just written and forgotten might sit idle for many months. Then it breaks. It’s Friday evening and a hotfix is needed right now, because a crucial invoicing procedure is generating weird, negative invoices for no reason!

Your poor colleague, Joe, is assigned to this dreaded task only to find your class. With elaborate algorithms solving simple tasks, because optimization1. With a rarely used LINQ method. With a neat little code trick. With a bit of “hackerish” list operations you had learnt back then from some cool blog.

Sure, the implementation worked great. Unfortunately, not for Joe. He most likely understands nothing of it. Not because he has brain freeze or is slightly slower. It is because you produced an unreadable, unmaintainable ball of mud.

Why readability matters

Joe would have been just fine if you had kept your solution simple. Saved the cool tricks and hacks for your pet projects at home. Didn’t optimize prematurely. Instead, focused on solving the problem at hand and making sure your code is as simple to follow as possible. That is invaluable in situations like the one described above.

Now… how do you improve the readability of your code? To discover that, we’ll take a look at the ExportTaskScheduler class, which solves one specific business requirement – scheduling an export:

public void ScheduleExportTask()
{
    var futureTask = new FutureTask
    {
        TargetIdentity = "ExportTask",
        CommandType = typeof(ExportCommand),
        PreparationDate = timeService.Now().AddSeconds(10),
    };
    var launcher = new Launcher
    {
        LaunchTime = timeService.Now().AddMinutes(10),
    };

    futureTaskScheduler.ScheduleFutureTask(futureTask, launcher);
}

That’s all there is. This code is fairly simple and readable. The method has a well-defined, single responsibility (it schedules an export2) and is only a few lines long. Anybody reading it should be able to tell what’s going on almost immediately. The readability goal has been achieved. Or… has it?

Let’s take a look at its unit test:

[Test]
public void TestScheduleFutureExportTask()
{
    var taskScheduler = A.Fake<IFutureTaskScheduler>();
    var timeService = A.Fake<ITimeService>();
    var scheduler = new ExportTaskScheduler(taskScheduler, timeService);
    A.CallTo(() => timeService.Now()).Returns(10.May(2015).At(13, 33));

    scheduler.ScheduleExportTask();

    A.CallTo(() => taskScheduler.ScheduleFutureTask(
            A<FutureTask>._, A<Launcher>._))
        .WhenArgumentsMatch(args =>
        {
            var futureTask = args.Get<FutureTask>(0);
            futureTask.CommandType.Should().Be(typeof(ExportCommand));
            futureTask.TargetIdentity.Should().Be("ExportTask");
            futureTask.PreparationDate.Should().Be(10.May(2015).At(13, 33, 10));

            var launcher = args.Get<Launcher>(1);
            launcher.LaunchTime.Should().Be(10.May(2015).At(13, 43));

            return true;
        })
        .MustHaveHappened(Repeated.Exactly.Once);
}

This code is certainly not as readable as the implementation. Why is that so? Unfortunately, unit tests are often treated as “second class citizens”, code where regular practices and patterns don’t apply. This is a mistake. You will often find test methods without a descriptive name, spanning multiple screens, with weird mock usage and cryptic variable naming – the developer’s careless attitude visible through code. Just as if a unit test was some sort of different, less important code. It is not. It is production-level code, just like any other.

Without further ado, let’s go over the issues one by one.

1. Test method name

The name TestScheduleFutureExportTask tells us nothing. This is bad because the method name is the first thing we see (especially when a test fails) and it’s the name which should give us a brief overview of the testing scenario (so that we quickly know what we are dealing with). Using one of the popular naming conventions we arrive at something closer to ScheduleExportTask_SchedulesCorrectTask_10MinutesFromNow.

2. Variables names

Since we deal with two schedulers, a high-level one (scheduling a business process) – ExportTaskScheduler, and a low-level one (scheduling code commands described by some objects) – FutureTaskScheduler, we should pay extra attention to how these two variables are named. The current choice (taskScheduler and scheduler) is rather poor, as it is hard to tell which is which. A rule of thumb is to keep variable names long, without any “smart” abbreviations. In the long run, exportTaskScheduler is an immediate tell, while scheduler, taskScheduler, tScheduler or exTskSchd are not.

3. Dealing with time

We use FluentAssertions syntax to neatly create DateTime instances but, truth be told, the exact value is irrelevant in this test. We need some point in time because we assert how other values relate to that given point (the task is scheduled 10 minutes from now). We are better off hiding the exact value, for example in a static field:

private static readonly DateTime Now = 10.May(2015).At(13, 33);

Then, our asserts are much more verbose:

launcher.LaunchTime.Should().Be(Now.AddMinutes(10));

4. Asserting values

Since this is the rare kind of test that verifies whether some other component was called (with correct parameters), and we are already doing asserts inside WhenArgumentsMatch, simplicity has taken a hit. What we can do is extract the assertions to separate methods (sketched below the test). Our test now looks like this:

[Test]
public void ScheduleExportTask_SchedulesCorrectTask_10MinutesFromNow()
{
  var futureTaskScheduler = A.Fake<IFutureTaskScheduler>();
  var timeService = A.Fake<ITimeService>();
  var exportTaskScheduler = new ExportTaskScheduler(futureTaskScheduler,
      timeService);
  A.CallTo(() => timeService.Now()).Returns(Now);

  exportTaskScheduler.ScheduleExportTask();

  A.CallTo(() => futureTaskScheduler.ScheduleFutureTask(
          A<FutureTask>._, A<Launcher>._))
      .WhenArgumentsMatch(args =>
          IsFutureTaskCreatedCorrectly(args) &&
          IsLauncherStarting10MinutesFromNow(args))
      .MustHaveHappened(Repeated.Exactly.Once);
}
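The extracted helpers keep the same trick of asserting inside the matcher and returning true. A sketch of what they might look like (FakeItEasy hands the matcher an ArgumentCollection; the exact bodies are an assumption based on the original asserts):

private static bool IsFutureTaskCreatedCorrectly(ArgumentCollection args)
{
    // Same asserts as before, now hidden behind an intention-revealing name.
    var futureTask = args.Get<FutureTask>(0);
    futureTask.CommandType.Should().Be(typeof(ExportCommand));
    futureTask.TargetIdentity.Should().Be("ExportTask");
    futureTask.PreparationDate.Should().Be(Now.AddSeconds(10));
    return true;
}

private static bool IsLauncherStarting10MinutesFromNow(ArgumentCollection args)
{
    var launcher = args.Get<Launcher>(1);
    launcher.LaunchTime.Should().Be(Now.AddMinutes(10));
    return true;
}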

It is good as it is, but there are a few more things we could do. They might not be as beneficial in this simple case but, as the number of test cases grows, they will make a difference.

5. Extracting instances creation

For any test case, the way the system/class under test is created is most likely irrelevant. What matters is easy access to such an instance and a clear, intuitive name (and we’ve already dealt with that). To put that noise away, we can move such preparation steps elsewhere. For example, to NUnit’s SetUp method, where each instance is created and stored in a class field:

[SetUp]
public void InitializeComponents()
{
    futureTaskScheduler = A.Fake<IFutureTaskScheduler>();
    timeService = A.Fake<ITimeService>();
    exportTaskScheduler = new ExportTaskScheduler(futureTaskScheduler,
        timeService);
}
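The fakes and the tested instance now live in class fields, implied by the SetUp above (a sketch):

private IFutureTaskScheduler futureTaskScheduler;
private ITimeService timeService;
private ExportTaskScheduler exportTaskScheduler;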

This way, the arrange part only tells us that the current time is such and such, which is the only relevant information.

6. Verbose arrange

…which could be improved further. How? By wrapping it in a more verbose method, like:

SetCurrentTimeTo(Now);
SetCurrentTimeToNow();
CurrentTimeIs(Now);
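Whichever name we pick, the helper itself is a one-liner wrapping the original arrange (a minimal sketch):

private void SetCurrentTimeTo(DateTime time)
{
    A.CallTo(() => timeService.Now()).Returns(time);
}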

The more test cases we have, the more beneficial such verbose arranges become. A similar thing can be done to the assert part but, since it will differ from test to test, there is little gain (we won’t be able to reuse such a verbose assert). The final version of the unit test might look like this:

[Test]
public void ScheduleExportTask_SchedulesCorrectTask_10MinutesFromNow()
{
  SetCurrentTimeTo(Now);

  exportTaskScheduler.ScheduleExportTask();

  A.CallTo(() => futureTaskScheduler.ScheduleFutureTask(
          A<FutureTask>._, A<Launcher>._))
      .WhenArgumentsMatch(args =>
          IsFutureTaskCreatedCorrectly(args) &&
          IsLauncherStarting10MinutesFromNow(args))
      .MustHaveHappened(Repeated.Exactly.Once);
}

Conclusion

Readability is important. Simple, easy to understand code is important. Somebody is going to read our work one day, maybe in a big hurry. We don’t want to make their work harder.

To learn more about practices similar to the ones presented in this post, I recommend Robert C. Martin’s book, Clean Code, and his training videos based on the book (these can be found online).

  1. Premature, of course.

  2. One might argue that there’s a hidden responsibility in the creation of objects (FutureTask and Launcher). However, these are DTO types and we should look at them just as we would look at string, int and similar types.

Getting started with JavaScript unit testing – Node, Jasmine, Karma and TDD on Windows

March 12, 2015 | tags: unit-testing javascript karma jasmine node.js windows

Backstory

I’m working on a .NET regex tutorial and I thought it would be nice to have sort of interactive, try-it-yourself examples embedded within the blog post. This sounds like a job for JavaScript, right? Simple enough. The only issue is, my JavaScript knowledge and experience are virtually non-existent. What do I do? I’ll start with a test!

JavaScript and unit testing

What are the unit testing options where JavaScript is concerned? To start, we need two things – a test runner and an assertion library. This StackOverflow question provides a decent overview of what’s available. It turns out all we need is Jasmine, which is both a test runner and an assertion framework, supporting a BDD style of writing tests (or rather specs).

Installing Jasmine on Windows

  1. Download and install node.js (it comes as a standard Windows .msi installer)
  2. Once it’s done, type the following in the command line to check whether node’s package manager (npm) was successfully installed (we’ll use npm to download further modules):

> npm --version

2.5.1

Now we only need a few more modules: Yeoman, Bower and Generator-Jasmine. Type the following in the console:

> npm install -g yo

> npm install -g bower

> npm install -g generator-jasmine

The -g switch tells npm to install packages in node’s global modules directory (rather than locally within your project’s directories).

To finalize the testing environment setup, we need to scaffold Jasmine’s test directory. To do that, we’ll navigate to the project directory and use Yeoman’s yo tool:

> yo jasmine

This will create a test directory with index.html and spec/test.js files, which will be of primary interest.

Running first test

The index.html is Jasmine’s test runner – open it in a browser and your tests will run. “How? What tests?” you might ask. Let’s take a quick look at index.html:

<!-- include source files here... -->

<!-- include spec files here... -->
<script src="spec/test.js"></script>

We simply need to reference our implementation and test files:

<!-- include source files here... -->
<script src="../regex-highlighter.js"></script>

<!-- include spec files here... -->
<script src="spec/regex-highlighter-tests.js"></script>

What’s next? The first test, obviously. Since this is a super-fresh environment, our first test for the highlightMatches function is going to be trivial, requiring the implementation to only return a fixed value:

'use strict';

(function () {
    describe('highlightMatches', function () {
        it('should return "Success"', function () {
          expect(highlightMatches('x', 'y')).toBe('Success');
        });
    });
})();

An explanation of Jasmine’s methods and the BDD style can be found on the Jasmine Introduction page. Without further ado, we add an equally simple implementation of the highlightMatches function – a trivial stub (parameter names here are placeholders) will do:
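function highlightMatches(input, pattern) {
    // Returning the expected value is just enough to make the first spec pass.
    return 'Success';
}

Refresh index.html and Jasmine is happy to announce that our first JavaScript test is a very successful one: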

First successful JavaScript test with Jasmine

Introducing Karma

Our current setup is up and working and we might just as well be done here. But there is one more thing that will help us greatly when developing JavaScript code – Karma. It is a test runner which watches over our files and runs all tests whenever we make any changes to the source files. A perfect match for a TDD/BDD environment! You can view the introductory video on YouTube (14:51) (don’t get confused – the tutorial talks about Testacular, which was Karma’s original name a while ago).

To get it, we need to execute the following (karma-cli is Karma’s command line interface module):

> npm install -g karma

> npm install -g karma-cli

Next, navigate to the project directory and initialize the configuration. Karma will “ask” a few simple questions and, based on your answers, generate a config file (jk.config.js):

> karma init jk.config.js

Which testing framework do you want to use ?

Press tab to list possible options. Enter to move to the next question.

> jasmine

...

What is the location of your source and test files ?

You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".

Enter empty string to move to the next question.

> ../regex-highlighter.js

> spec/regex-highlighter-tests.js

>
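The generated jk.config.js will capture those answers roughly like this (a sketch – the exact contents depend on your answers and the Karma version):

module.exports = function (config) {
  config.set({
    // Testing framework chosen during `karma init`.
    frameworks: ['jasmine'],
    // Source and test files entered above.
    files: [
      '../regex-highlighter.js',
      'spec/regex-highlighter-tests.js'
    ],
    // Watch the files and re-run tests on every change.
    autoWatch: true
  });
};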

Configuration is ready. All that’s left to do is run Karma, passing the configuration file name as an argument:

> karma start jk.config.js

Everything should be fine and we’ll be greeted with a message similar to the one below:

Successful Karma setup

Modifying the test to make it fail gets noticed immediately:

Failing test

Summary

To get started with JavaScript unit testing you need to:

  1. Install node.js
  2. (optional) Install Yeoman, Bower and Generator-Jasmine: npm install -g yo bower generator-jasmine (this trio isn’t needed when you use Karma – Karma will take care of dependencies on its own)
  3. (optional) Scaffold the Jasmine test directory: yo jasmine
  4. (optional) Run the first test by opening Jasmine’s index.html
  5. Install Karma: npm install -g karma karma-cli
  6. Configure Karma: karma init <config>
  7. Start Karma: karma start <config>

Logging test results with NUnit

February 28, 2015 | tags: unit-testing nunit extensions logging design

Recently, a question popped up on StackOverflow asking what needs to be done in order to custom-log unit test failures. Not many people know it, but NUnit offers an extensions API which can be utilized to solve this very problem. In this post, we’ll see how.

NUnit Addins API

To extend NUnit we need to implement an addin listening to the events NUnit triggers during different stages of test execution. Our response to such events (preferably the test finished event) will be logging some data to a file. As simple as that. Let’s see what we’ve got:

  • IAddin interface & NUnitAddinAttribute – these two will be used to “introduce” our addin to NUnit and make sure it is loaded and present during test execution
  • EventListener – this interface (yes, an interface) will be our primary implementation doing the actual logging when some test-related event occurs

All the components we need are available in the NUnit.AddinsDependencies package on NuGet.

NUnitFileLoggerAddin

1. Detection

In order for NUnit to detect our addin, we need to mark the implementing class with NUnitAddinAttribute and implement the IAddin interface:

[NUnitAddin(
  Name = "File Logger",
  Description = "Writes test result to file",
  Type = ExtensionType.Core)]
public partial class NUnitFileLoggerAddin : IAddin

We’ll also kick off the unit test project with the very first test, verifying whether our addin is discoverable. With FluentAssertions, it is as easy as:

[Test]
public void NUnitFileLoggerAddin_IsDiscoverable()
{
    var addin = new NUnitFileLoggerAddin();

    addin.Should().BeAssignableTo<IAddin>();
    addin.GetType().Should().BeDecoratedWith<NUnitAddinAttribute>(
        a => a.Type == ExtensionType.Core);
}

2. Installation

Next, the addin must hook itself into NUnit’s extension system via the IAddin.Install method:

public bool Install(IExtensionHost host)
{
  var listeners = host.GetExtensionPoint("EventListeners");
  if (listeners == null)
    return false;

  listeners.Install(this);
  return true;
}

This makes sure we receive notifications whenever a test-related event occurs.

3. EventListener interface

This interface offers notifications for various stages of test suite execution. The one we want to hook into is the TestFinished method. We’ll simply log the time, the result and the test name. If a test fails, we also save the error message:

public void TestFinished(TestResult result)
{
    using (var file = File.Open("Log.txt", FileMode.Append))
    using (var writer = new StreamWriter(file))
    {
        var message = string.Format("[{0:s}] [{1}] {2}", DateTime.Now,
            result.ResultState, result.Name);
        writer.WriteLine(message);
        var isFailure =
            result.ResultState == ResultState.Error ||
            result.ResultState == ResultState.Failure;
        if (isFailure)
        {
            writer.WriteLine(result.Message);
        }
    }
}

That’s all you need to log test results to a custom file. Simply copy the NUnitFileLoggerAddin class files to your test project and your tests will be logged to the Log.txt file. However, we are far from done.

Design considerations

In its current form, our addin is a rather poor piece of software. We lack proper unit tests (File.Open and DateTime.Now sort of get in the way) and even changing the log file name would require recompilation. This is no good.

Before we jump straight to refactoring, let’s take a moment to think about possible improvements and extension points for our addin.

1. Code quality improvements

  • We should have unit tests for the logging part. This requires abstracting file access and time.
  • NUnit will not allow us to inject abstracted dependencies via constructor arguments (addin instances are created via reflection). We need to find a way around it.
  • NUnit will not allow the addin to live in a separate assembly (it must be in the same one as our tests)1. We want to have the majority of the features in a base class, so that all that is required is creating a derived type in our test assembly.
  • (optional) Opening a file for writing with each test execution is not a very efficient thing to do. We’d be better off storing the results and writing them all at once.

2. Extensibility

  • It would be good if we could change the log file name/location.
  • …or the log message format.
  • (optional) Instead of a file, maybe we could write test results to a database or a web service.

Refactoring for testability and extensibility

Our first step is to introduce abstractions over the file system and time: IFileStreamFactory and ITimeProvider, respectively (both sketched after the snippet below). Now we also need to solve the problem of providing those abstractions. Since NUnit creates the addin instance using reflection, there must be a working parameterless constructor. Yet we also need a constructor with parameters to pass mocked dependencies in unit tests. What do we do? We use an anti-pattern – poor man’s DI:

public NUnitFileLoggerAddin()
    : this(new FileStreamFactory(), new TimeProvider())
{
}

public NUnitFileLoggerAddin(
    IFileStreamFactory fileStreamFactory,
    ITimeProvider timeProvider)
{
    this.fileStreamFactory = fileStreamFactory;
    this.timeProvider = timeProvider;
}
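For completeness, the two abstractions can be as thin as this (a sketch – their exact shape is an assumption, inferred from how the tests below use them):

public interface IFileStreamFactory
{
    // Mirrors File.Open(path, mode) so the addin can be tested without touching disk.
    Stream Create(string path, FileMode mode);
}

public interface ITimeProvider
{
    // Mirrors DateTime.Now.
    DateTime Now();
}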

We’re good to write a few tests for the logging part. As you might know from my previous posts, unit testing, IDisposable and Stream don’t play along very well. To test I/O interactions we will be using a StreamRecorder class.
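A minimal sketch of such a recorder (an assumption – essentially a MemoryStream wrapper that exposes whatever was written to it):

public class StreamRecorder
{
    private readonly MemoryStream stream = new MemoryStream();

    // Handed to the code under test; everything written to it gets recorded.
    public Stream UnderlyingStream
    {
        get { return stream; }
    }

    // Text written so far; MemoryStream.ToArray still works after the stream is disposed.
    public string WrittenContent
    {
        get { return Encoding.UTF8.GetString(stream.ToArray()); }
    }
}

With it in place, the test reads: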

[Test]
public void TestFinished_LogsSuccessfulTestNameAndTimestampToFile()
{
    var testResult = CreateTestResult("DummyTestName",
        ResultState.Success);
    var streamRecorder = new StreamRecorder();
    A.CallTo(() => fileStreamFactory.Create(A<string>._, A<FileMode>._))
        .Returns(streamRecorder.UnderlyingStream);
    A.CallTo(() => timeProvider.Now())
        .Returns(10.May(2015).At(17, 35, 20));

    addin.TestFinished(testResult);

    streamRecorder.WrittenContent.Should()
        .StartWith("[2015-05-10T17:35:20] [Success] DummyTestName");
}

The test above simply verifies whether the correct message is written to the log file. We should add a couple more tests for the logging functionality before we proceed to the extensibility refactoring. All the unit tests written for NUnitFileLoggerAddin can be viewed in my GitHub repository.

Extension points

At this point our addin is fully usable. We might even use it to record its own tests – all we need is a local type inheriting from our base NUnitFileLoggerAddin class:

[NUnitAddin] public class LoggerAddin : NUnitFileLoggerAddin { }

This is a minor nuisance given that we want our addin to be reusable but, luckily, the majority of the features can remain in the base class.

Back to extension points. As I mentioned, we want to have control over the output formatting and the log file path. To achieve this, our base NUnitFileLoggerAddin will expose several protected virtual members:

protected virtual string LogFilePath { get { return "Log.txt"; } }
protected virtual string CreatePassedTestMessage(TestResult result,
    DateTime currentTime)
protected virtual string CreateFailedTestMessage(TestResult result,
    DateTime currentTime)

Now our LoggerAddin can, for example, change the way failed tests are reported:

[NUnitAddin]
public class LoggerAddin : NUnitFileLoggerAddin
{
    protected override string CreateFailedTestMessage(TestResult result,
        DateTime currentTime)
    {
        return string.Format("{0} failed. Investigate!", result.Name);
    }
}

Conclusion

Although we only did simple logging, the available API offers much more in terms of extensibility. For example, a similar mechanism can be used to build a database integration testing API, where NUnit gathers all tests marked with a special database attribute and, before any of them is run, executes some code – for example creating a database and inserting test data. We’ll explore these options in the next blog post.

It is also worth noting that the upcoming NUnit 3.0 will replace the way addins are implemented. Read about it at the NUnit Addins wiki page and Addins replacement in (NUnit) Framework.