After the initial excitement about xUnit and how cool your tests become with AutoDataAttribute, my itch to convert all my tests to xUnit has died down, and I'm not as excited any more. xUnit is certainly not a silver bullet.

I do agree with some of the decisions in xUnit, like not providing [SetUp] and [TearDown] and instead relying on the constructor and Dispose(), but I can't agree with others. And unfortunately the authors of xUnit have not been listening to their community. I'll list the annoyances I discovered in the order I ran into them.
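For readers who haven't seen the difference, here is a minimal sketch of the two styles (OrderService is a made-up class used purely for illustration; the NUnit code needs NUnit.Framework, the xUnit code needs Xunit and System):

// NUnit: explicit attributes for per-test setup and teardown
[TestFixture]
public class OrderServiceNUnitTests
{
    private OrderService _service;

    [SetUp]
    public void SetUp() { _service = new OrderService(); }

    [TearDown]
    public void TearDown() { _service.Dispose(); }

    [Test]
    public void Creates_order() { /* ... */ }
}

// xUnit: the constructor runs before every test, Dispose() runs after it
public class OrderServiceXunitTests : IDisposable
{
    private readonly OrderService _service;

    public OrderServiceXunitTests() { _service = new OrderService(); }

    public void Dispose() { _service.Dispose(); }

    [Fact]
    public void Creates_order() { /* ... */ }
}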

1. Unable to provide a message on failed assertion

In NUnit you could write Assert.IsFalse(true, "Yep, true is not false") – every assertion let you attach a piece of text that is shown when the assertion fails. I have used that in the past. Not in many tests, but in a few – enough to notice when I decided to convert those tests to xUnit. In xUnit you can't do that, apart from Assert.True() and Assert.False(), where you can still provide a message.

This feature has been removed by the authors and it is not coming back. The reasoning is: "If you need messages because you can't understand the test code otherwise, then that's the problem (and not the lack of a message parameter)" (comment from Brad Wilson). I do agree with the "smelly test" part, but that is not the only reason to use messages – there are cases where the message is the error.

Take my test that uses reflection to go through all EF models and verify that each one has a parameterless constructor. In that test I build a list of classes that fail the condition, and at the end, if the list is not empty, I throw an assertion exception with a comma-separated string of the offending types. This is not your conventional unit test, but it is still a valid test. With NUnit the assertion looks like this:

var finalMessage = String.Join(", ", errors);
Assert.IsEmpty(errors, finalMessage);
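
For context, the whole test is roughly this shape (a sketch – MyDbContext and the "MyApp.Models" namespace are stand-ins for the real EF types):

// Reflection test sketch: collect every EF model class without a
// parameterless constructor and fail with the list of offenders.
[Test]
public void All_EF_models_have_parameterless_constructor()
{
    var errors = typeof(MyDbContext).Assembly
        .GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract && t.Namespace == "MyApp.Models")
        .Where(t => t.GetConstructor(Type.EmptyTypes) == null)
        .Select(t => t.Name)
        .ToList();

    var finalMessage = String.Join(", ", errors);
    Assert.IsEmpty(errors, finalMessage);
}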

As part of the message on a test failure I get the list of classes that fail my test. There is no other way to pass a message about the failure to the developer.

If I move this to xUnit, the assertion would look like this:

var finalMessage = String.Join(", ", errors);
Assert.False(errors.Any(), finalMessage);

Now this is a code smell: it is not immediately clear what we are asserting. Maybe I'm just being picky, but I'm not very happy about these lines. I can predict that Brad Wilson would suggest writing an extension of the data attributes (the family behind [InlineData]) that provides the list of Types to be tested. Yep, that sounds like a reasonable idea. But I could not find any documentation on how to do it.
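
For what it's worth, my best guess at such an extension for 1.9.x looks roughly like this (a sketch only – the DataAttribute base class comes from Xunit.Extensions, and MyDbContext / "MyApp.Models" are stand-ins again):

// Custom data attribute that feeds every EF model type into a theory
public class EfModelTypesAttribute : DataAttribute
{
    public override IEnumerable<object[]> GetData(MethodInfo methodUnderTest, Type[] parameterTypes)
    {
        return typeof(MyDbContext).Assembly
            .GetTypes()
            .Where(t => t.IsClass && !t.IsAbstract && t.Namespace == "MyApp.Models")
            .Select(t => new object[] { t });
    }
}

[Theory]
[EfModelTypes]
public void EF_model_has_parameterless_constructor(Type modelType)
{
    Assert.NotNull(modelType.GetConstructor(Type.EmptyTypes));
}

With this shape each model type would show up as its own test case in the runner, which also solves the "which class failed?" problem without any assertion message.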

2. Lack of documentation.

The previous point leads straight to this one. I could not find any comprehensive document or site that covers the features of xUnit. Documentation for xUnit looks like this at the moment:

[Screenshot of the xUnit.net documentation page, February 2014]

There is a bunch of outdated articles linked from the home page, and another bunch of blog posts which you first have to find. But nothing is collected in one central place – unlike NUnit's amazing documentation. In that respect NUnit beats many projects, even commercial ones, so the comparison is hardly fair.

To find out how some things work I had to trawl through the xUnit source code, only to discover that I was looking at the v2 source while I was using 1.9.x. I was determined enough to get the right source and find the parts I needed, but less experienced developers won't do this and will struggle.

3. Unable to have “Manual execution only” tests

You can ignore NUnit tests with the [Ignore] attribute. I would prefer the reason to be a mandatory parameter of that attribute, but that is secondary. Ignored tests are skipped by test runners, but you can still run a single ignored test manually. This is useful when you write exploratory tests – where you are just trying things out and they are not really tests, but more like pieces of code in your system that you can execute separately from the whole thing. Or when they are integration tests interacting with an external API, where you need to manually undo the effects of the executed test. And I'm not the only one who uses this practice.
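
A tiny example of what I mean (the test body and the client class are hypothetical):

// NUnit: skipped on normal runs, but I can still select and run it by hand
[Test]
[Ignore("Exploratory - hits the live third-party API, run manually only")]
public void Poke_payment_gateway_sandbox()
{
    var client = new PaymentGatewayClient();   // made-up SUT
    client.CreateTestCharge(42);
}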

In xUnit you ignore a test like this: [Fact(Skip="run only manually")]. Only you can't run it at all! Not even manually. And people want that! Jimmy Bogard restores his test database by manually executing an ignored test. He came up with a workaround where skipped tests run only when a debugger is attached. Not a bad idea, and other people have done the same.
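
For reference, the workaround is roughly this (a sketch of the pattern, not Jimmy's exact code):

// "Run only under a debugger": the fact reports itself as skipped
// unless a debugger is attached when the attribute is constructed.
public class RunnableInDebugOnlyAttribute : FactAttribute
{
    public RunnableInDebugOnlyAttribute()
    {
        if (!System.Diagnostics.Debugger.IsAttached)
        {
            Skip = "Only runs when a debugger is attached";
        }
    }
}

[RunnableInDebugOnly]
public void Restore_test_database()
{
    // drop and re-create the integration test database here
}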

Running these tests under a debugger kind of works, but it looks like a kludge to me. Why not allow manual execution instead of multiplying the hacks?

4. Unable to filter out tests by categories

NUnit has a [Category()] attribute that marks the test(s) with a category. Usually these categories mark tests as "fast", "slow", "database", "integration", "smoke", etc., so when you run the tests you can include (or exclude) only the tests suitable for the environment. xUnit has the Trait attribute, which is a pair of strings. To create a category with Trait you have to do this: [Trait("Category", "database")]. The key-value structure gives a bit more flexibility, but I can't come up with a scenario where I'd use anything other than "Category" as the trait key. Also, the code examples shipped with xUnit 1.9.2 include a [Category("")] attribute that inherits from Trait and fixes the key to "Category". But in xUnit v2 (currently in alpha) the Trait attribute is sealed, so you can't inherit from it any more.
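
The sample attribute looks roughly like this (reconstructed from memory of the 1.9.2 samples, so treat it as a sketch):

// A [Category] convenience attribute: just a Trait with the key fixed to "Category"
public class CategoryAttribute : TraitAttribute
{
    public CategoryAttribute(string category)
        : base("Category", category)
    {
    }
}

[Fact]
[Category("database")]
public void Saves_order_to_the_database()
{
    // integration test that talks to the real database
}

Note that this only works while TraitAttribute can be inherited, which is exactly what the v2 alpha takes away.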

We run our tests on TeamCity, and the way we execute xUnit is through an MSBuild script. Here is the problem: I can't filter tests by their Trait attribute when executing through MSBuild. And this is not an oversight – it is intentional. The idea behind it is to "…place different test types into different assemblies, rather than use traits for the filtering." Excellent idea, I say! Let's have a bunch of test assemblies to make Visual Studio even slower. A few months back I merged a million (OK, there were 6) of our test assemblies into one project to speed up VS and reduce maintenance. Now let's revert that and create a few extra test assemblies just to filter by test type.

Consider this scenario: in one of my projects I have integration tests that need a database, and I also have integration tests that use the Azure emulator. On the build server, one step executes all non-integration tests first; then, if none fail, the next step re-creates the database and a further build step executes the database tests. See this video for the reasons behind this setup. For these build steps I first need to filter out the database tests, then include only the database tests.

Because I use a hosted build server I can't run the Azure emulator there, and all my tests that use the storage emulator will fail without it, so I need to filter those out as well.

According to the xUnit authors, these tests should live in two separate assemblies. I have about 10 database tests and about 20 Azure emulator tests. That is two extra assemblies with a very small number of tests each. Good practice? I don't think so! It only encourages a mess – I've been there, and I did not like it. Every separate test project in your solution adds to the maintenance burden.

And what about people who separate tests into "fast" and "slow", executing the fast ones first and the slow ones later on their build server? Or who skip slow tests on the CI build and run them only in nightly builds? There is no clear dividing line between those tests, unlike in my example, and within the same class you can have slow tests next to fast ones, all testing the same SUT. How do you propose to handle that? Throw tests from one assembly to another when they become slow? Now that is a serious mess waiting to happen.

And if you can't filter by traits on the build server, what is the point of them? GUI runners usually let you choose which tests you'd like to run, and only on very rare occasions do I filter tests by category in a GUI.

I know you can filter by traits with the console runner. I could not make it work :-(. It also sounds a bit hypocritical to me: the authors allow filtering by traits in the console runner, but not in the MSBuild runner – and MSBuild is the one meant for automated test execution.

Conclusion

While I enjoyed writing tests with the AutoData attribute from AutoFixture, I can't really say xUnit solved my issues with test execution. There is a good chance I don't understand a lot of the concepts behind this framework, but there is no good place to go for an explanation. And if somebody has answers to my moans, please feel free to speak out in the comments! I'd love to hear you prove me wrong, because I hope I am wrong here.

So far, given all the hype around it, xUnit has been a disappointment in my experience. I'll keep it for the cases where I benefit from AutoDataAttribute, but all other tests will be based on NUnit.

  • Pingback: Convert your projects from NUnit/Moq to xUnit with NSubstitute « Trailmax Tech

  • Bruno Juchli

    xUnit is on Github, where there is some documentation available.
    https://github.com/xunit/xunit

    As far as 1 – providing messages on failure – goes: a lot of people are using the excellent FluentAssertions (https://github.com/dennisdoomen/FluentAssertions) library, where you can specify messages.

    • Do you really consider the xUnit page on GitHub to be proper documentation? It's pretty much the same as the CodePlex page for xUnit: "How do I install xUnit?" – seriously? And the page about using the framework does not even mention theories and data sources.
      People don't want to spend time writing tests as it is, never mind learning the tricks of a testing framework. And the lack of proper docs ("How do I extend the framework?", "How do I use data sources?") does not help it gain popularity with the general crowd of developers.

      As for FluentAssertions, I'll have a look at it.
      However, that has nothing to do with xUnit itself, so the lack of messages is still a downside – though not a serious one.

      • Weston McNamee

        I have to agree with @bruno on this one. If the built-in asserts don't cut it for you, don't use them. There are lots of assertion libraries out there; FluentAssertions is the one I consider the best.

        • Yes, since the post was written I've started using FluentAssertions and it is very cool. Teammates love it – they say the assertions are a lot more readable.

  • newbie

    Thanks for the article. I am in the process of selecting a testing framework for an existing project, and your points are spot on about the kind of problems I face. For developers who write business applications (as opposed to frameworks or libraries), ease of use is very important. We want to leverage testing, but not spend too much time extending the testing framework. Out-of-the-box solutions are necessary.

  • Sam

    Regarding point 1 and the absence of Assert message parameters, you wrote:

    “There is no other way to pass a message to developer about failed test.”

    One exception to this is just throwing an exception with a message:

    if (errors.Any())
    {
        var finalMessage = String.Join(", ", errors);
        throw new Exception(finalMessage);
    }

    • That'll work. Only I'm not sure how the message would be presented by a test runner on a build server. And it steps outside the usual unit-testing convention of doing assertions – that does not look like an assertion and might confuse other developers.

  • Sam

    Regarding point 4, the lack of trait filtering via MSBuild has been marked as resolved on CodePlex: https://xunit.codeplex.com/workitem/9778.

    Also, TraitAttribute isn’t currently sealed, so you can inherit from it. (I just tested this to confirm.)

    • Sorry, missed this comment somehow.

      If it is fixed, it'll be in v2.0, which is not yet released for production. As far as I remember, TraitAttribute was sealed in v1.x and unsealed in the newer version (or is it the other way around?). Anyway, if it can be inherited in v2, that's great!

  • Ognyan Dimitrov

    In case this might be of assistance – you can easily mitigate most of your concerns:
    1. It is not that much of an issue.
    2.1. This site: http://xunitpatterns.com/index.html
    2.2. This book: http://www.amazon.com/xUnit-Test-Patterns-Refactoring-Code/dp/0131495054
    2.3. And this link: http://stackoverflow.com/questions/9006615/are-there-any-good-documentations-books-tutorials-for-xunit-net should be sufficient documentation.
    3 and 4. Every reasonable test runner supports test sessions in which you can group tests.

    xUnit is a great framework :)

    • 1. Yes. Not a problem anymore – I moved to FluentAssertions and it works great.
      2.1. Not really xUnit framework documentation – it's a collection of general testing patterns. And yes, I have seen it.
      2.2. Same as above.
      2.3. Yep, seen that one too, along with every single page linked from there. The problem with these is distribution: there is no good central place for all the information. I have managed to find out how to do things on every occasion, but it took a lot of effort and a lot of google-fu. That's not how a testing framework should roll. A lot of people don't want to struggle with writing tests as it is, and getting into the nitty-gritty of a testing framework is the last thing they want to do.

      3. The ReSharper test runner can't run skipped tests. The built-in VS test runner can't do it. NCrunch can't do it. Leaving the IDE for a stand-alone test runner (does one exist?) is too much ceremony. NOPE.
      4. All I need grouping for is the build server, and the provided build-server runner could not do it (at the time of writing this article: February 2014).

      So which "reasonable test runners" are you talking about? I need one!

      Overall yes, the framework is OK. I love the way I can work with parameterised tests and how AutoFixture works with it. But in the longer run it makes absolutely no giggling difference what testing framework you use, as long as tests fail for the right reasons.

  • Ognyan Dimitrov
    • Yes, I tried that. Having to remember that you need a debugger to run a test is rubbish. Trying to explain that to teammates is even worse :-(