This post is an extended version of my answer to a Stackoverflow question in which a user asks which pieces of code you should really unit test.
The grand confusion
Every now and then Stackoverflow spawns a question from a person looking for clarification on when one should write unit tests. You have probably heard advice from both ends of the spectrum, varying from (unit) test everything to don't bother with (unit) testing at all.
Let's take a quick look at TDD, a popular software development technique which utilizes unit testing heavily at its core. One of its prime rules states:
Write new code only if an automated test has failed1
This, combined with the general understanding that the more ground your tests cover, the better, quickly leads to the conclusion that you simply should have a test for every single line of code2, even among people who are not doing TDD at all (nor any other test-first approach). While this advice might sound good on paper, in practice it's almost never worth following as an ultimate guideline. Why is that so?
We need to have it done. Due yesterday.
Software development, like any other business, is driven by money. Somebody needs something and is willing to pay to have it done. Costs are initially estimated, only to be quickly rejected by the customer. Then, it turns out it can be done faster, better and cheaper. A few feedback loops like that, and there's a rough estimate acceptable to the customer. Unfortunately, the calculations are most likely wrong. Why? Because software development estimation is hard3, and faster, better and cheaper is usually unattainable.
Cheap and fast are the ones commonly chosen. This naturally has a negative impact on quality (application quality as in "we could have done it better", but more often than not, code quality too). This shouldn't be a surprise, as people are reluctant to spend more money once the product is good enough (and nobody [business-wise] really cares whether your code is good or not).
You might be wondering what all this has to do with unit testing. It shouldn't have any influence on how you write code, right? Wrong.
We need it all, we need it now.
Fast and cheap will often result in conversations like this between you and your project manager:
Manager: Done with your task?
You: It's almost ready, I just need to clean up the code and refactor a few things. Maybe add a test or two.
Manager: Never mind that. Commit your changes, you'll finish the clean-up some other time.
You: Are you sure? I could make that code a bit better, you know.
Manager: No, no, we're releasing a new version. Just commit what you've got.
Of course, you should know that some other time is at some point in time approaching infinity in disguise. Which simply means never. These are business/money constraints that directly affect you. The task needs to be done, and that's all. If you didn't manage to complete the perfect version of the task in time, you'll have to go with a less perfect version (aka good enough). More often than not, you'll realize you simply have to settle for putting some view code in view models, having a singleton here and there, or skipping a few tests. You know how to do certain things better, but deadlines don't like to wait.
Back to tests. Given that you already have limited time, can you really spend it writing tests? No doubt. But only those that yield some value. What does yield some value mean, exactly? Kent Beck states that tests should exercise one of the following:
- conditionals
- loops
- operations
- polymorphism
What about constructor code that does simple assignments? Or auto-properties? Or wrapper classes that delegate calls to other components (that falls into the operations part)? In most cases, the general advice is to not test that code, primarily for the reasons already stated:
- it takes time (which is limiting factor)
- it produces extra code to maintain
- it brings little value to the project (how beneficial is a test for an auto-property, really?)
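To make this concrete, here is a sketch of the kind of code in question, together with the test it would take to cover it. All names are made up for illustration, and the test uses xUnit-style attributes as an assumption about your test framework:

```csharp
using Xunit;

// Hypothetical example code - names are made up for illustration.
public interface ICustomerRepository { void Save(object customer); }

public class CustomerViewModel
{
    private readonly ICustomerRepository _repository;

    public CustomerViewModel(ICustomerRepository repository)
    {
        _repository = repository;      // plain assignment - hard to get wrong
    }

    public string Name { get; set; }   // auto-property - no logic at all

    // Wrapper that merely delegates the call to another component.
    public void Save() => _repository.Save(this);
}

public class CustomerViewModelTests
{
    // Note how the test does little more than restate the code it covers.
    [Fact]
    public void Name_roundtrips_assigned_value()
    {
        var vm = new CustomerViewModel(null) { Name = "Bob" };
        Assert.Equal("Bob", vm.Name);
    }
}
```

There is no conditional, loop, operation or polymorphism here for a test to exercise; the test can only fail if the compiler itself is broken.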
Test "value" uncertainty
How do you determine which tests bring value and which don't? Unfortunately, there's no universal answer to this question; you just know, based on your prior experience. Most of us can tell the difference in complexity between plain-assignments constructor code and SQL parser code. How about an SQL parser versus a config file builder? Both are non-trivial tasks; which is more difficult? Do you write tests for one and not the other? What about the common MVVM pattern, the INotifyPropertyChanged implementation? Do you write a test for a property that checks a condition and performs an operation (raises an event)? Those are two points from Kent Beck's list, but at the same time it's such trivial and boilerplate code... should you bother?
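For reference, a typical implementation of such a property looks roughly like this (a sketch; the class and property names are made up):

```csharp
using System.ComponentModel;

// A sketch of a typical INotifyPropertyChanged property;
// class and property names are made up for illustration.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value)   // the conditional from Kent Beck's list...
                return;
            _name = value;
            // ...and the operation: raising the event.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}
```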
Kent Beck answered a similar question already:
I get paid for code that works, not for tests, so my philosophy is to test as little as possible to reach a given level of confidence (...). If I don't typically make a kind of mistake (...), I don't test for it. I do tend to make sense of test errors, so I'm extra careful when I have logic with complicated conditionals. When coding on a team, I modify my strategy to carefully test code that we, collectively, tend to get wrong.
Which once again narrows down to personal experience. For example, I hardly ever test the aforementioned
INotifyPropertyChanged, because I can usually quickly relate something displaying incorrectly in the UI to a XAML binding check and a
PropertyChanged event raising check. As a result, those tests are usually of no value to me. On the other hand, if a particular property is a constant troublemaker, I do have a test for it. For others the strategies might be different. YMMV.
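When a property does earn a test, the test itself is short: subscribe to the event and assert it fires (and, just as importantly, that it doesn't fire when the value hasn't changed). Below is an xUnit-style sketch; the view model is a minimal made-up stand-in for whatever troublemaker class you have in your code base:

```csharp
using System.ComponentModel;
using Xunit;

// Minimal stand-in for a production view model; in a real code base
// the class under test would live elsewhere.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;
    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}

public class PersonViewModelTests
{
    [Fact]
    public void Setting_Name_raises_PropertyChanged()
    {
        var vm = new PersonViewModel();
        string raisedFor = null;
        vm.PropertyChanged += (s, e) => raisedFor = e.PropertyName;

        vm.Name = "Alice";

        Assert.Equal("Name", raisedFor);
    }

    [Fact]
    public void Setting_Name_to_same_value_does_not_raise_again()
    {
        var vm = new PersonViewModel { Name = "Alice" };
        var raised = false;
        vm.PropertyChanged += (s, e) => raised = true;

        vm.Name = "Alice";

        Assert.False(raised);
    }
}
```

The second test is the one that tends to catch real regressions, since the equality guard is the part people accidentally delete or invert.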
I find Kent's advice highly pragmatic - you always need to keep adjusting your code writing/testing habits to your team, your current level of knowledge, your tools and what not. There is no single golden rule or universal guide.