After the initial excitement about xUnit and how cool your tests become with AutoDataAttribute, my itch to convert all my tests to xUnit has died down, and I'm not as excited any more. xUnit is certainly not a silver bullet.
I do agree with some of the things xUnit does, like not providing
[TearDown] and instead relying on the constructor and Dispose(), but I can't agree with other things. And the authors of xUnit have not been listening to their community, which is unfortunate. I'll list the annoyances I discovered in the order I ran into them.
1. Unable to provide a message on a failed assertion
In NUnit you could say
Assert.IsFalse(true, "Yep, true is not false"). Every assertion let you attach a piece of text that is shown when the assertion fails. I have used that in the past – not in many tests, but in a few, enough to notice when I decided to convert those tests to xUnit. In xUnit you can't do that, apart from
Assert.False(), where you can provide a message.
This feature has been removed by the authors and it is not coming back. The reason given is “If you need messages because you can’t understand the test code otherwise, then that’s the problem (and not the lack of a message parameter)” (a comment from Brad Wilson). I do agree with the point about smelly tests, but that is not the only reason to use messages – there are cases where the message is the error report.
See my test that uses reflection to go through all EF models and verify the presence of a parameterless constructor. In that test I build a list of classes that fail the condition, and at the end, if the list is not empty, I fail the assertion with a comma-separated string of the offending types. This is not your conventional unit test, but it is still a valid test. With NUnit the assertion looks like this:
var finalMessage = String.Join(", ", errors);
Assert.IsEmpty(errors, finalMessage);
As part of the message I get on a test failure, I get the list of classes that fail my check. There is no other way to pass a message about the failure to the developer.
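For context, the scan itself can be sketched like this. This is a hypothetical sketch: BCL types stand in for the EF model classes so the snippet is self-contained and runs anywhere; the real test scans the model assembly instead.

```csharp
using System;
using System.Linq;

// Hypothetical sketch of the reflection check described above.
// BCL types stand in for the EF model classes; the real test
// would enumerate the types of the model assembly.
var modelTypes = new[] { typeof(System.Text.StringBuilder), typeof(Uri) };

// collect the names of types without a public parameterless constructor
var errors = modelTypes
    .Where(t => t.GetConstructor(Type.EmptyTypes) == null)
    .Select(t => t.Name)
    .ToList();

var finalMessage = String.Join(", ", errors);
Console.WriteLine(finalMessage); // StringBuilder passes the check; Uri does not
```

The interesting part is that `finalMessage` is built at run time – exactly the kind of message the removed overloads were useful for.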
If I move the assertion to xUnit, it would look like this:
var finalMessage = String.Join(", ", errors);
Assert.False(errors.Any(), finalMessage);
Now this is a code smell. It is not immediately clear what we are asserting. Maybe I'm just picky, but I'm not very happy about these lines. I can predict that Brad Wilson would suggest writing an extension of the
[InlineData] attribute that provides the list of types to be tested. Yep, that sounds like a reasonable idea. But I could not find any documentation on how to do that.
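For what it's worth, here is my guess at the easy half of that idea – the row source a custom data attribute could wrap, so that each type becomes its own test case and a failing type is reported by name without any assertion message. This is a hypothetical shape, not documented xUnit API usage, and BCL types again stand in for the models:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: a row source for a [Theory]. A custom data
// attribute could return these rows so the runner reports each type
// as a separate pass/fail result.
IEnumerable<object[]> ModelTypeRows()
{
    // the real version would enumerate the EF model assembly
    yield return new object[] { typeof(System.Text.StringBuilder) };
    yield return new object[] { typeof(Uri) };
}

var names = ModelTypeRows().Select(r => ((Type)r[0]).Name).ToList();
Console.WriteLine(String.Join(", ", names)); // StringBuilder, Uri
```

How to plug such a source into xUnit's data attributes is exactly the part I could not find documented.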
2. Lack of documentation
The previous point leads straight to this one. I could not find any comprehensive document or site that documents the features of xUnit. The documentation for xUnit looks like this at the moment:
There is a bunch of outdated articles linked from the home page, and another bunch of blog posts which you first need to find. But there is no central place for any of it – unlike NUnit's amazing documentation. In that department NUnit beats many projects, even commercial ones, so it's not really a fair comparison.
To find out how some things work I had to trawl through the xUnit source code, only to discover that I was looking at the v2 source while I was using 1.9.x. I was determined enough to get the right source and find the parts I needed, but less experienced developers would not do this and will struggle.
3. Unable to have “Manual execution only” tests
In NUnit you can ignore tests with the
[Ignore] attribute. I would like the reason to be a mandatory parameter of that attribute, but that is secondary. Ignored tests are skipped by test runners, but you can still run a single ignored test manually. This is useful when you write exploratory tests – where you are just trying things out and it is not really a test, but more a piece of code in your system that you can execute separately from the whole thing. It also helps with integration tests against an external API, where you need to manually undo the effects of an executed test. And I'm not the only one who uses this practice.
In xUnit you ignore tests like this:
[Fact(Skip="run only manually")]. Only you can't run them at all – not even manually! And people want to. Jimmy Bogard restores his test database by manually executing an ignored test. He came up with the idea of having skipped tests run only when a debugger is attached. Not a bad idea, and other people have done the same.
Running these tests under the debugger kind of works, but it looks like a kludge to me. Why not allow manual execution instead of multiplying the hacks?
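The gate behind that trick is just a check of Debugger.IsAttached. In xUnit it would live inside a FactAttribute subclass that sets its Skip property when no debugger is present; the sketch below is my paraphrase of the idea, reduced to the runnable core rather than code from Jimmy's post:

```csharp
using System;
using System.Diagnostics;

// Sketch of the debugger-gate idea. In a real suite this check would sit
// in a FactAttribute subclass: Skip left null means "run the test",
// a non-null Skip makes the runner skip it with the given reason.
string skipReason = Debugger.IsAttached
    ? null                                       // debugger attached: run
    : "Only runs under an attached debugger";    // otherwise: skip

Console.WriteLine(skipReason ?? "running");
```

It works, but the test's runnability now depends on how it was launched – which is the part that feels like a kludge.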
4. Unable to filter out tests by categories
NUnit has the
[Category()] attribute that marks a test (or a test class) with a category. Usually these categories mark tests as “fast”, “slow”, “database”, “integration”, “smoke”, etc., so when you run the tests you can include (or exclude) only the ones suitable for the environment. xUnit has the
Trait attribute instead, which is a pair of strings. To create a category with Trait you have to do this:
[Trait("Category", "database")]. The key-value structure gives a bit more flexibility than plain categories, but I can't come up with a scenario where I'd use anything other than “Category” for the trait key. Also, the code examples shipped with xUnit 1.9.2 include a
[Category("")] attribute that inherits from
Trait and fixes the key to
"Category". But in xUnit v2 (which is in alpha just now) the
Trait attribute is sealed, so you can't inherit from it any more.
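To make the trick concrete, here is a sketch of that 1.9.x-era derived attribute. Note the TraitAttribute below is a minimal local stand-in, not xUnit's own class, so the snippet compiles without the xUnit package; the point is the inheritance that v2's sealed attribute rules out.

```csharp
using System;

// Local stand-in for xUnit's TraitAttribute (assumption: simplified shape,
// just the name/value pair), so this sketch is self-contained.
[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, AllowMultiple = true)]
public class TraitAttribute : Attribute
{
    public string Name { get; private set; }
    public string Value { get; private set; }

    public TraitAttribute(string name, string value)
    {
        Name = name;
        Value = value;
    }
}

// The derived attribute: lets you write [Category("database")]
// instead of [Trait("Category", "database")].
public class CategoryAttribute : TraitAttribute
{
    public CategoryAttribute(string category) : base("Category", category) { }
}
```

With xUnit v1 this worked because Trait was inheritable; with v2's sealed Trait you are back to spelling out the key-value pair everywhere.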
We run our tests on TeamCity, and the way to execute xUnit there is through an MSBuild script. Here is the problem: I can't filter tests by their
Trait attribute when executing through MSBuild. And this is not an oversight – it is intentional. The idea behind it is to “…place different test types into different assemblies, rather than use traits for the filtering.”. Excellent idea, I say! Let's have a bunch of test assemblies to make Visual Studio even slower. A few months back I merged a million (OK, there were 6) of our test assemblies into one project to speed up VS and reduce maintenance. Now let's revert that and create a few extra test assemblies just to filter by test type.
See this scenario: in one of my projects I have integration tests that need a database, and also integration tests that use the Azure emulator. On the build server I first execute all non-integration tests in one step; then, if none fail, the next step re-creates the database and a further build step executes the database tests. See this video for the reasons behind this setup. For these build steps I first need to exclude the database tests, then to include only the database tests.
Because I use a hosted build server I can't run the Azure emulator there, and all my tests that use the storage emulator will fail without it, so I need to filter them out as well.
According to the xUnit authors, these tests should live in 2 separate assemblies. I have about 10 database tests and about 20 Azure emulator tests – that's 2 extra assemblies with a very small number of tests each. Good practice? I don't think so! It only encourages a mess – I've been there, and I did not like it. Every separate test project in your solution doubles the maintenance burden.
And what about people who separate tests into “fast” and “slow”, run the fast ones first and the slow ones later on their build server, or skip the slow tests on the CI build and run them only in nightly builds? There is no clear distinction between those tests, unlike in my example, and within the same class you can have slow tests next to fast ones, all testing the same SUT. How do you propose to work this one out? Throw tests from one assembly to another when they become slow? Now that is some serious mess waiting to happen.
And if you can't filter by traits on the build server, what is the point of them? GUI runners usually let you choose which tests to run, and only on very rare occasions do I filter tests by category in a GUI.
I know, you can filter by traits with the console runner. I could not make it work :-(. It also sounds a bit hypocritical to me: the authors allow filtering by traits in the console runner, but not in the MSBuild runner – the one actually meant for automated test execution.
While I enjoyed writing tests with the AutoData attribute from AutoFixture, I can't really say xUnit solved my problems with test execution. It is quite possible that I don't understand many of the concepts behind this framework, but there is no good place to go for an explanation. If anybody has answers to my moans, please feel free to speak out in the comments! I'd love to hear you prove me wrong, because I hope I'm wrong here.
So far xUnit has been a disappointment in my experience, given all the hype around it. I'll keep it for the cases where I benefit from
AutoDataAttribute, but all my other tests will be based on NUnit.