V3 Documentation

Unique Test Names

Summary

When working with NCrunch, it is extremely important that the names of your tests be distinctive, consistent and unique. This includes test names that are automatically derived from theory or test case parameters. Test names that are not unique between tests or are derived from randomly generated parameters cannot be reliably executed by NCrunch and will cause considerable problems for the engine.

Why?

Under NCrunch, each test has a lifespan that goes far beyond that of the physical process in which it is constructed. Tests first come to life when they are discovered by NCrunch during an analysis step, which involves loading the test assembly and interrogating it for test methods through integration with a test framework. Once a test is discovered, it is stored within NCrunch's cache system (and later in a .cache file) along with all relevant data, such as the time taken to execute it, code coverage and performance information, trace output, etc.

For such data to retain meaning, and for the test to be passed to a test framework for execution, there must be a reliable way to identify the test and separate it from other tests in the same assembly. Because source code is constantly changing, the only reliable way to identify a test is by its physical name, which is usually a string derived from its fixture, method and parameters.

NUnit has its own internal method of identifying tests using a sequencing system, where the ID of a test is determined by its position in the assembly. This is inadequate for NCrunch, because when the source code changes, the sequence also changes. Inserting a test partway into an assembly would invalidate all subsequent identifiers, losing important data. NCrunch therefore does not piggyback on this system; instead, it constructs its own test name from the various components of the test (fixture, method, parameters).
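The idea can be illustrated with a rough sketch. The concatenation scheme below is an assumption for illustration only; the exact format NCrunch uses internally may differ.

```csharp
using System;

// Sketch of position-independent test naming: the name is built only
// from stable components of the test, never from its position in the
// assembly, so inserting or removing other tests does not change it.
public static class TestNameSketch
{
    public static string BuildName(string fixture, string method, params object[] parameters)
    {
        return fixture + "." + method + "(" + string.Join(", ", parameters) + ")";
    }
}
```

For example, a parameterised test on the fixture above would always be identified as "Fixture.Test(1)" regardless of how many tests precede it in the assembly.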

Why don't other runners have this problem?

Other test runners usually only show results from a single run through your test assembly. They discover the tests, hold them in memory, execute them, report the results, then discard everything on completion. This allows them to hold individual test cases in memory and identify them through memory address. Because there is no requirement to store results from the tests across multiple runs, or split execution over multiple processes simultaneously, there is no need to identify tests outside of the process in which they are initially discovered.

It is because of NCrunch's features that additional constraints on test distinctiveness and uniqueness exist.

Is this ever going to be fixed?

No. Unlike functional defects or temporary compatibility issues, problems caused by tests not being uniquely identifiable are derived from technical limitations rather than oversight or incorrect design. Features of test frameworks that allow such tests to exist are fundamentally incompatible with next generation test runners such as NCrunch. The only way to make these tests work would be to remove all the features that make NCrunch worth using.

Common Examples of Problematic Test Naming

NUnit TestCaseSource With User Defined Type

Consider the following code.

using System;
using System.Collections.Generic;
using NUnit.Framework;

public class Fixture
{
    public static IEnumerable<TestCaseParameter> TestCases
    {
        get 
        { 
            return new[]
            {
                new TestCaseParameter(1),
                new TestCaseParameter(2)
            }; 
        }
    }

    [Test, TestCaseSource(nameof(TestCases))]
    public void Test(TestCaseParameter parameter)
    {
        if (parameter.Value == 1)
            Console.WriteLine("First");
        if (parameter.Value == 2)
            Console.WriteLine("Second");
    }
}

public class TestCaseParameter
{
    public TestCaseParameter(int value)
    {
        Value = value;
    }

    public int Value;
}

The above code contains two test cases that pass different instances of the same user-defined type as parameters. Because the user-defined type does not implement .ToString(), there is no way to tell its instances apart outside of the process in which they are created. This code therefore produces two test cases with the same visible and internal name. The correct solution is to implement .ToString() on the user-defined type and ensure all relevant data is included in the result.
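The fix could look like the following sketch. The exact format returned by .ToString() is a free choice, as long as it includes every piece of data that distinguishes one instance from another.

```csharp
public class TestCaseParameter
{
    public TestCaseParameter(int value)
    {
        Value = value;
    }

    public int Value;

    // Including all relevant data in ToString() gives each instance a
    // distinctive, repeatable textual form, so the test case names
    // derived from it are unique and stable across processes.
    public override string ToString()
    {
        return "TestCaseParameter(" + Value + ")";
    }
}
```

With this override in place, the two test cases are named differently (one containing "TestCaseParameter(1)", the other "TestCaseParameter(2)") and can be identified reliably.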

NUnit Test Case With Inconsistent Name

Consider the following code.

using System;
using System.Collections.Generic;
using NUnit.Framework;

public class Fixture
{
    public static IEnumerable<long> TestCases
    {
        get
        {
            return new[]
            {
                DateTime.Now.Ticks,
            };
        }
    }

    [Test, TestCaseSource(nameof(TestCases))]
    public void Test(long value)
    {
        Console.WriteLine(value);
    }    
}

The above code contains a single test case that uses an inconsistent value for its sole parameter. Every time the test case is discovered, the parameter has a different value. NCrunch is unable to correlate data for this test case because it does not have a consistent name. This means that every time the code is compiled, the test is completely rediscovered and is treated as an entirely new test, with all existing coverage and result data discarded.

Avoid using unstable test case parameters that do not give a consistent value. A test case parameter must be fixed to a single distinctive value to be fed into a name that can uniquely identify the test.
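A minimal sketch of the fix is shown below. The specific values are arbitrary placeholders; any fixed values will do. The NUnit attribute usage is shown in a comment so the snippet stands alone without the framework.

```csharp
using System.Collections.Generic;

public class Fixture
{
    // Fixed, deterministic values: the test case names derived from
    // these parameters are identical on every discovery run, so
    // NCrunch can correlate results and coverage across builds.
    public static IEnumerable<long> TestCases
    {
        get
        {
            return new[] { 1L, 100L, 10000L };
        }
    }

    // The test method itself is unchanged:
    // [Test, TestCaseSource(nameof(TestCases))]
    // public void Test(long value)
    // {
    //     Console.WriteLine(value);
    // }
}
```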

NUnit Random Attribute

Consider the following code.

using System;
using NUnit.Framework;

public class Fixture
{
    [Test]
    public void Test([Random(1)] long value)
    {
        Console.WriteLine(value);
    }    
}

This code uses NUnit's Random Attribute to automatically generate a parameter for a test method. Every time the test is discovered, it has a different parameter value. This results in NCrunch being unable to retain data for the test because it does not have a consistent name. Every time the code is compiled, the test is completely rediscovered and treated as a new test. All data is thus discarded.

Avoid using NUnit's Random Attribute. This attribute is not supported by NCrunch.
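One possible replacement is to supply a small set of fixed values instead, for example via NUnit's Values attribute. This is a sketch; the values chosen here are arbitrary, and the attribute usage is shown in a comment so the snippet stands alone without the framework.

```csharp
public class Fixture
{
    // Fixed candidate values stand in for the randomly generated one,
    // giving each test case a stable, unique name.
    public static readonly long[] FixedValues = { 0L, 1L, long.MaxValue };

    // With NUnit, the fixed values can be supplied directly:
    // [Test]
    // public void Test([Values(0L, 1L, long.MaxValue)] long value)
    // {
    //     Console.WriteLine(value);
    // }
}
```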

Xunit Theory With AutoFixture

AutoFixture is a popular library that can be used to provide automatic substitution of theory test case parameters where these parameters are not considered relevant to the test case. Parameters are substituted using randomly generated values.

Consider the following code.

using Ploeh.AutoFixture;
using System.Collections.Generic;
using Xunit;

namespace XUnitAutoFixture
{
    public class TestFixture
    {
        private static readonly Fixture Fixture = new Fixture();

        public static IEnumerable<object[]> SomeTestData()
        {
            yield return new object[] { Fixture.Create<long>() };
            yield return new object[] { Fixture.Create<long>() };
        }

        [Theory, MemberData(nameof(SomeTestData))]
        public void Test(object value)
        {

        }
    }
}

The above code creates two test cases using randomly generated parameters. AutoFixture encapsulates the random generation, so it is not visible here. Every time these test cases are created, they have different parameter values, so the test cases have a different name every time they are discovered by NCrunch. The names also change inside the subsequent test processes used for execution, preventing NCrunch from reliably instructing Xunit to target the tests. The result is an error every time NCrunch executes them.

Avoid using random generation to produce test case parameters when working with any framework or toolset. If necessary, hard code some values instead.
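A corrected version of the data source could look like the sketch below. The hard-coded values are arbitrary placeholders; the Xunit attribute usage is shown in a comment so the snippet stands alone without the framework.

```csharp
using System.Collections.Generic;

public class TestFixture
{
    // Hard-coded parameters: the test case names derived from these
    // values never change between discovery and execution, so NCrunch
    // can reliably target each case.
    public static IEnumerable<object[]> SomeTestData()
    {
        yield return new object[] { 17L };
        yield return new object[] { 42L };
    }

    // The theory itself is unchanged:
    // [Theory, MemberData(nameof(SomeTestData))]
    // public void Test(object value)
    // {
    // }
}
```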

Xunit Theory With AutoFixture (AutoData)

Consider the following code.

using Ploeh.AutoFixture.Xunit;
using Xunit;

namespace XUnitAutoFixture
{
    public class TestFixture
    {
        [Theory]
        [AutoData()]
        public void Test(int value)
        {

        }
    }
}

The above code is another example of using AutoFixture to automatically substitute test parameters with random values. As with the earlier example, the parameter values feed inconsistent data into the test's name, preventing NCrunch from uniquely identifying the test. Avoid using the AutoData attribute; specify fixed parameter values instead.
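With Xunit, the usual fixed-value alternative is the InlineData attribute. The sketch below uses arbitrary placeholder values, and the attribute usage is shown in a comment so the snippet stands alone without the framework.

```csharp
public class TestFixture
{
    // Fixed literals replace the randomly generated AutoData parameter,
    // so each test case has a stable, unique name.
    public const int FirstCase = 1;
    public const int SecondCase = 2;

    // With Xunit, the cases are enumerated explicitly:
    // [Theory]
    // [InlineData(1)]
    // [InlineData(2)]
    // public void Test(int value)
    // {
    // }
}
```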