Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
- Martin Fowler, Refactoring (1999)
The code you produce will be read. The “brilliant” class you have just written and forgotten might sit idle for many months. Then it breaks. It’s Friday evening and a hotfix is needed right now, because a crucial invoicing procedure is generating weird, negative invoices for no reason!
Your poor colleague, Joe, is assigned to this dreaded task, only to find your class. With elaborate algorithms solving simple tasks, because optimization1. With a rarely used LINQ method. With a neat little code trick. With a bit of “hackerish” list operations you had learnt back then from some cool blog.
Sure, the implementation worked great. Unfortunately, not for Joe. He most likely understands none of it. Not because he has brain freeze or is slightly slower. It is because you had produced an unreadable, unmaintainable ball of mud.
Why readability matters
Joe would have been just fine if you had kept your solution simple. Saved the cool tricks and hacks for your pet projects at home. Didn’t optimize prematurely. Instead, focused on solving the problem at hand and making sure your code is as simple to follow as possible. That is invaluable in situations like the one described above.
Now… how do you improve the readability of your code? To discover that, we’ll take a look at the ExportTaskScheduler class, which solves one specific business requirement – scheduling an export:
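A minimal sketch of such a class (the FutureTask, Launcher, FutureTaskScheduler and time-provider member names are illustrative):

```csharp
public class ExportTaskScheduler
{
    private readonly IFutureTaskScheduler futureTaskScheduler;
    private readonly ITimeProvider timeProvider;

    public ExportTaskScheduler(IFutureTaskScheduler futureTaskScheduler, ITimeProvider timeProvider)
    {
        this.futureTaskScheduler = futureTaskScheduler;
        this.timeProvider = timeProvider;
    }

    public void ScheduleExportTask()
    {
        // The export should be launched 10 minutes from now.
        var launchTime = timeProvider.Now.AddMinutes(10);
        futureTaskScheduler.Schedule(new FutureTask(new Launcher("export"), launchTime));
    }
}
```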
That’s all there is. This code is fairly simple and readable. The method has a well-defined, single responsibility (it schedules the export2) and is only a few lines long. Anybody reading it should be able to tell what’s going on almost immediately. The readability goal has been achieved. Or… has it?
Let’s take a look at its unit test:
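The test might look more or less like this (FakeItEasy and FluentAssertions syntax; the exact dates and member names are illustrative):

```csharp
[Test]
public void TestScheduleFutureExportTask()
{
    var scheduler = A.Fake<IFutureTaskScheduler>();
    var timeProvider = A.Fake<ITimeProvider>();
    A.CallTo(() => timeProvider.Now).Returns(1.January(2015).At(12, 00));
    var taskScheduler = new ExportTaskScheduler(scheduler, timeProvider);

    taskScheduler.ScheduleExportTask();

    A.CallTo(() => scheduler.Schedule(A<FutureTask>.Ignored))
        .WhenArgumentsMatch(args =>
        {
            // Assertion buried inside the argument matcher.
            ((FutureTask)args[0]).LaunchTime.Should().Be(1.January(2015).At(12, 10));
            return true;
        })
        .MustHaveHappened();
}
```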
This code is certainly not as readable as the implementation. Why is that so? Unfortunately, unit tests are often treated as “second-class citizens”, code where regular practices and patterns don’t apply. This is a mistake. You will often find test methods without descriptive names, spanning multiple screens, with weird mock usage and cryptic variable naming – all of the developer’s careless attitude visible through the code. Just as if a unit test was some sort of different, less important code. It is not. It is production-level code, just like any other.
Without further ado, let’s go over the issues one by one.
1. Test method name
The name TestScheduleFutureExportTask tells us nothing. This is bad, because the method name is the first thing we see (especially when a test fails) and it is the name that should give us a brief overview of the testing scenario (so that we quickly know what we are dealing with). Using one of the popular naming conventions, we arrive at something closer to ScheduleExportTask_SchedulesCorrectTask_10MinutesFromNow.
2. Variables names
Since we deal with two schedulers – a high-level one (scheduling a business process), ExportTaskScheduler, and a low-level one (scheduling code commands described by objects), FutureTaskScheduler – we should pay extra attention to how these two variables are named. The current choice (taskScheduler and scheduler) is rather poor, as it is hard to discover which is which. The rule of thumb is to keep variable names long, without any “smart” abbreviations. In the long run, exportTaskScheduler is an immediate tell, while scheduler, taskScheduler, tScheduler or exTskSchd are not.
3. Dealing with time
We use FluentAssertions syntax to neatly create DateTime instances but, truth be told, the exact value is irrelevant in this test. We need some point in time, because we assert how other values relate to that point (the task is scheduled 10 minutes from now). We are better off hiding the exact value, for example in a static field:
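For example (the field name and date are illustrative):

```csharp
// The exact point in time is irrelevant – only relations to it matter.
private static readonly DateTime Now = 1.January(2015).At(12, 00);
```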
Then, our asserts are much more verbose:
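For example (LaunchTime is an illustrative property name):

```csharp
// The assert now expresses the relation, not a magic value.
task.LaunchTime.Should().Be(Now.AddMinutes(10));
```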
4. Asserting values
Since this is that rare kind of test which verifies whether another component was called (with correct parameters), and we are already doing asserts inside WhenArgumentsMatch, simplicity has taken a hit. What we can do is extract the assertions into separate methods. Our test now looks like this:
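A sketch of that shape (names follow the conventions discussed above; exact members are illustrative):

```csharp
[Test]
public void ScheduleExportTask_SchedulesCorrectTask_10MinutesFromNow()
{
    var futureTaskScheduler = A.Fake<IFutureTaskScheduler>();
    var timeProvider = A.Fake<ITimeProvider>();
    A.CallTo(() => timeProvider.Now).Returns(Now);
    var exportTaskScheduler = new ExportTaskScheduler(futureTaskScheduler, timeProvider);

    exportTaskScheduler.ScheduleExportTask();

    A.CallTo(() => futureTaskScheduler.Schedule(A<FutureTask>.Ignored))
        .WhenArgumentsMatch(args => TaskIsScheduled10MinutesFromNow((FutureTask)args[0]))
        .MustHaveHappened();
}

private static bool TaskIsScheduled10MinutesFromNow(FutureTask task)
{
    // The assertion lives in a well-named method instead of the matcher body.
    task.LaunchTime.Should().Be(Now.AddMinutes(10));
    return true;
}
```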
It is good as it is, but there are a few more things we could do. They might not be as beneficial in this simple case but, as the number of test cases grows, they will make a difference.
5. Extracting instances creation
For any test case, the way the system/class under test instance is created is most likely irrelevant. What matters is easy access to such an instance and a clear, intuitive name (and we’ve already dealt with that). To put that noise away, we can move such preparation steps elsewhere. For example, to NUnit’s SetUp method, where each instance is created and stored in a class field:
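A sketch of such a fixture setup (field names are illustrative):

```csharp
private ExportTaskScheduler exportTaskScheduler;
private IFutureTaskScheduler futureTaskScheduler;
private ITimeProvider timeProvider;

[SetUp]
public void SetUp()
{
    // Creation noise lives here, outside of the test bodies.
    futureTaskScheduler = A.Fake<IFutureTaskScheduler>();
    timeProvider = A.Fake<ITimeProvider>();
    exportTaskScheduler = new ExportTaskScheduler(futureTaskScheduler, timeProvider);
}
```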
This way, the arrange part states only that the current time is such and such, which is the only relevant information.
6. Verbose arrange
…which could be improved further. How? By wrapping it in a more verbose method, like:
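For example (the method name is illustrative):

```csharp
private void PresentTimeIs(DateTime dateTime)
{
    // Reads like a sentence in the arrange part of any test.
    A.CallTo(() => timeProvider.Now).Returns(dateTime);
}
```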
The more test cases we have, the more beneficial such verbose arranges get. A similar thing can be done to the assert part but, since it will differ from test to test, there is little gain (we won’t be able to reuse such a verbose assert). The final version of the unit test might look like this:
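Putting the previous steps together, a sketch of the final shape:

```csharp
[Test]
public void ScheduleExportTask_SchedulesCorrectTask_10MinutesFromNow()
{
    PresentTimeIs(Now);

    exportTaskScheduler.ScheduleExportTask();

    A.CallTo(() => futureTaskScheduler.Schedule(A<FutureTask>.Ignored))
        .WhenArgumentsMatch(args => TaskIsScheduled10MinutesFromNow((FutureTask)args[0]))
        .MustHaveHappened();
}
```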
Readability is important. Simple, easy to understand code is important. Somebody is going to read our work one day, maybe in a big hurry. We don’t want to make their work harder.
To learn more about practices similar to the ones presented in this post, I recommend Robert C. Martin’s book, Clean Code, and his training videos based on the book (these can be found online).
One might argue that there’s a hidden responsibility in the creation of objects (FutureTask and Launcher). However, these are DTO types and we should look at them just as we would look at string, int and similar types. ↩
Installing Jasmine on Windows
Download and install node.js (it comes as a standard Windows .msi installer).
Once it’s done, type the following in the command line to see whether node’s package manager (npm) was successfully installed (we’ll use npm to download further modules):
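A version check will do, and we can install Yeoman with its Jasmine generator in the same go (generator-jasmine is the name the generator was published under; adjust if yours differs):

> npm -v
> npm install -g yo generator-jasmine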
The -g switch tells npm to install packages in node’s global modules directory (rather than locally within your project’s directories).
To finalize the testing environment setup, we need to scaffold Jasmine’s test directory. To do that, we’ll navigate to the project directory and use Yeoman’s yo tool:
> yo jasmine
This will create a test directory with index.html and spec/test.js files, which will be of primary interest to you.
Running first test
The index.html is Jasmine’s test runner – open it in a browser and your tests will run. “How? What tests?” you might ask. Let’s take a quick look at index.html:
We simply need to reference our implementation and test files:
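The relevant part boils down to two script tags (the file names are illustrative – adjust them to your project layout):

```html
<!-- implementation under test -->
<script src="src/highlightMatches.js"></script>
<!-- Jasmine specs -->
<script src="spec/test.js"></script>
```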
What’s next? The first test, obviously. Since this is a super-fresh environment, our first test for the highlightMatches function is going to be trivial, requiring the implementation to only return a value:
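A sketch of such a trivial spec (the expected behavior is illustrative – the implementation may simply return an empty string for now):

```javascript
describe('highlightMatches', function () {
    it('returns a value', function () {
        // The only requirement so far: the function returns something.
        expect(highlightMatches('')).toBeDefined();
    });
});
```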
Tests can also be run continuously from the command line – for that we’ll use Karma. To get it, we need to execute the following (karma-cli is Karma’s command line interface module):
> npm install -g karma
> npm install -g karma-cli
Next, navigate to the project directory and initialize the configuration. Karma will “ask” a few simple questions and, based on your answers, it will generate a config file (jk.config.js):
> karma init jk.config.js
Which testing framework do you want to use ?
Press tab to list possible options. Enter to move to the next question.
What is the location of your source and test files ?
You can use glob patterns, eg. "js/*.js" or "test/**/*Spec.js".
Enter empty string to move to the next question.
The configuration is ready. All that’s left to do is run Karma, passing the configuration file name as an argument:
> karma start jk.config.js
Everything should be fine and we’ll be greeted with a message similar to the one below:
Modifying the test to make it fail will get noticed immediately:
Recently, a question popped up on Stack Overflow asking what needs to be done in order to custom-log unit test failures. Not many people know it, but NUnit offers an extensions API which can be utilized to solve this very problem. In this post, we’ll see how.
NUnit Addins API
To extend NUnit, we need to implement an addin listening to events NUnit triggers during the different stages of test execution. Our response to such events (preferably the test finished event) will be logging some data to a file. As simple as that. Let’s see what we’ve got.
1. NUnitAddinAttribute and IAddin
In order for NUnit to detect our addin, we need to mark the class implementing it with NUnitAddinAttribute and implement the IAddin interface:
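A minimal skeleton (the addin class name matches the one used later in this post):

```csharp
using NUnit.Core.Extensibility;

[NUnitAddin(Description = "Logs test results to a file")]
public class NUnitFileLoggerAddin : IAddin
{
    public bool Install(IExtensionHost host)
    {
        // Filled in when we hook into the extensions system below.
        return true;
    }
}
```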
We’ll also kick off the unit tests project with the very first test, verifying whether our addin is discoverable. With FluentAssertions, it is as easy as:
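A sketch of such a test (the test name is illustrative):

```csharp
[Test]
public void Addin_IsMarkedWithNUnitAddinAttribute()
{
    // NUnit discovers addins by this attribute, so its presence is worth a test.
    typeof(NUnitFileLoggerAddin).Should().BeDecoratedWith<NUnitAddinAttribute>();
}
```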
2. IAddin.Install method
Next, the addin must hook itself into NUnit’s extensions system via the IAddin.Install method:
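Something along these lines (this assumes the addin class also implements the EventListener interface, covered in the next step):

```csharp
public bool Install(IExtensionHost host)
{
    // Ask NUnit for the EventListeners extension point and register ourselves.
    var listeners = host.GetExtensionPoint("EventListeners");
    if (listeners == null)
        return false;

    listeners.Install(this);
    return true;
}
```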
This is to make sure we receive notifications when a test-related event occurs.
3. EventListener interface
This interface offers notifications for various stages of test suite execution. The one that we want to hook into is the TestFinished method. We’ll simply log the time, the result and the test name. If a test fails, we also save the error message:
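A first, naive version might look like this (the log format is illustrative; File.Open and DateTime.Now are used directly on purpose – they get refactored away below):

```csharp
public void TestFinished(TestResult result)
{
    using (var writer = new StreamWriter(File.Open("Log.txt", FileMode.Append)))
    {
        // Time | result | test name
        writer.WriteLine("{0} | {1} | {2}", DateTime.Now, result.ResultState, result.Name);
        if (!result.IsSuccess)
            writer.WriteLine("    {0}", result.Message);
    }
}
```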
That’s all you need to log test results to a custom file. Simply copy the NUnitFileLoggerAddin class files to your test project and your tests will be logged to a Log.txt file. However, we are far from done.
In its current form, our addin is a rather poor piece of software. We lack proper unit tests (File.Open and DateTime.Now sort of get in the way) and even changing the log file name would require recompilation. This is no good.
Before we jump straight to refactoring let’s take a moment to think about possible improvements and extension points of our addin.
1. Code quality improvements
We should have unit tests for the logging part. This requires abstracting file access and time.
NUnit will not allow us to inject abstracted dependencies via constructor arguments (addin instances are created via reflection). We need to find a way around it.
NUnit will not allow the addin to live in a separate assembly (it must be in the same one as our tests)1. We want to have the majority of the features in a base class, so that all that is required is creating a derived type in the test assembly.
(optional) Opening a file for writing with each test execution is not a very efficient thing to do. We’d be better off storing the results and writing them all at once.
2. Extensibility improvements
It would be good if we could change the log file name/location.
…or the log message format.
(optional) Instead of a file, maybe we could write test results to a database or a web service.
Refactoring for testability and extensibility
Our first step is to introduce abstractions over the file system and time: IFileStreamFactory and ITimeProvider, respectively. Now we also need to solve the problem of providing those abstractions. Since NUnit creates the addin instance using reflection, there must be a working parameterless constructor. Yet we also need a constructor with parameters, to pass mocked dependencies in unit tests. What do we do? We use an anti-pattern – poor man’s DI:
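A sketch of the two constructors (the concrete FileStreamFactory and TimeProvider names are illustrative):

```csharp
public class NUnitFileLoggerAddin : IAddin, EventListener
{
    private readonly IFileStreamFactory fileStreamFactory;
    private readonly ITimeProvider timeProvider;

    // Used by NUnit, which creates the addin via reflection.
    public NUnitFileLoggerAddin()
        : this(new FileStreamFactory(), new TimeProvider())
    {
    }

    // Used by unit tests to inject fakes.
    public NUnitFileLoggerAddin(IFileStreamFactory fileStreamFactory, ITimeProvider timeProvider)
    {
        this.fileStreamFactory = fileStreamFactory;
        this.timeProvider = timeProvider;
    }
}
```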
We’re good to write a few tests for the logging part. As you might know from my previous posts, unit testing, IDisposable and Stream don’t play along very well. To test I/O interactions, we will be using the StreamRecorder class:
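One of those tests might look like this (the StreamRecorder members, the factory method name and the SuccessfulResult test helper are illustrative):

```csharp
[Test]
public void TestFinished_LogsTestName()
{
    var streamRecorder = new StreamRecorder();
    var fileStreamFactory = A.Fake<IFileStreamFactory>();
    A.CallTo(() => fileStreamFactory.Create(A<string>.Ignored)).Returns(streamRecorder.Stream);
    var timeProvider = A.Fake<ITimeProvider>();
    A.CallTo(() => timeProvider.Now).Returns(1.January(2015).At(12, 00));
    var addin = new NUnitFileLoggerAddin(fileStreamFactory, timeProvider);

    addin.TestFinished(SuccessfulResult("SampleTest"));

    // The recorder captured everything written to the "file".
    streamRecorder.RecordedText.Should().Contain("SampleTest");
}
```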
The test above simply verifies whether the correct message is written to the log file. We should add a couple more tests for the logging functionality before we proceed to the extensibility refactoring. All unit tests written for NUnitFileLoggerAddin can be viewed at my GitHub repository.
At this point, our addin is fully usable. We might even use it to record its own tests – all we need is a local type inheriting from our base NUnitFileLoggerAddin class:
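For example:

```csharp
// Lives in the test assembly; everything interesting stays in the base class.
[NUnitAddin]
public class LoggerAddin : NUnitFileLoggerAddin
{
}
```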
This is a minor nuisance given that we want our addin to be reusable but, luckily, the majority of the features can remain in the base class.
Back to the extension points. As I mentioned, we want to have control over the output formatting and the log file path. To achieve this, our base NUnitFileLoggerAddin will expose several protected virtual members:
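A sketch of those members (names and default formats are illustrative; timeProvider is assumed to be accessible to derived classes):

```csharp
protected virtual string LogFilePath
{
    get { return "Log.txt"; }
}

protected virtual string FormatTestResult(TestResult result)
{
    // Time | result | test name
    return string.Format("{0} | {1} | {2}", timeProvider.Now, result.ResultState, result.Name);
}

protected virtual string FormatFailedTestResult(TestResult result)
{
    // By default, a failure is the regular line plus the error message.
    return FormatTestResult(result) + " | " + result.Message;
}
```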
Now our LoggerAddin can, for example, change the way failed tests are reported:
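For example (the custom format is illustrative):

```csharp
[NUnitAddin]
public class LoggerAddin : NUnitFileLoggerAddin
{
    protected override string FormatFailedTestResult(TestResult result)
    {
        return string.Format("FAILED: {0} ({1})", result.Name, result.Message);
    }
}
```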
Although we only did simple logging, the available API offers much more in terms of extensibility. For example, a similar mechanism can be used to write a database integration testing API, where NUnit gathers all tests marked with a special database attribute and, before any of them runs, executes some code – for example creating a database and inserting test data. We’ll explore these options in the next blog post.