NCrunch does its best to behave in a similar way to other test runners, by ensuring the test is executed within its own process and within a workspace almost identical to the area of the file system usually used to execute it. However, tests that make unusual assumptions about the environment they run in may require some minor adjustments before they can be crunched correctly. If you have tests that fail within NCrunch but not within your normal test runner, it's worth reading through the tips below as they will likely help you solve this problem.
A common reason for tests failing artificially in NCrunch is that they are depending on files that are not referenced by their encompassing test project. For example - your test may make use of a resource file that it attempts to pull data from using a relative reference. Resource files that are not included in the test component's project file are unknown to NCrunch and they will not be copied to the workspace used to run the test.
There are multiple solutions to this problem.
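One approach is to stop relying on the process working directory altogether and resolve data files relative to the test assembly itself, while making sure the files are included in the test project with their Copy to Output Directory property set. A minimal sketch (the `TestData\sample.xml` path and fixture name are hypothetical):

```csharp
using System;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ResourceFileTests
{
    [Test]
    public void CanReadTestData()
    {
        // Anchor the path to the test assembly's location instead of
        // assuming a particular working directory. Because the file is
        // included in the project, NCrunch copies it into the workspace.
        var baseDir = Path.GetDirectoryName(
            typeof(ResourceFileTests).Assembly.Location);
        var dataFile = Path.Combine(baseDir, "TestData", "sample.xml");

        Assert.IsTrue(File.Exists(dataFile));
    }
}
```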
Another common cause for test failures is where custom builds (or build steps) are being used to produce artifacts required for the runtime execution of tests. An example of this could be a .config file substitution step that is used as a post build event to supply configuration to the test process.
In any case, this is actually a build issue and should be troubleshot as such.
Sometimes tests make dynamic use of their referenced assemblies at runtime (for example, by specifically loading them into a separate application domain). It's important to consider that when NCrunch builds components, it will avoid copying their referenced assemblies into the build output directory unless the Copy referenced assemblies to workspace option is set to true.
This can be a less-than-obvious source of problems for tests that make assumptions about the files that exist in their working directory while they are run. For example, you may have a test that tries to establish a path for resource files by using the codebase or location of one of the assemblies in the test's application domain. This test may work fine outside NCrunch because it relies on all the assemblies being executed from the same place, though it may fail when executed by NCrunch as the application domain could be wired together from assemblies that exist in different workspaces.
Read more about these problems and how to solve them.
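The fragile pattern described above, and a safer alternative, can be sketched as follows (`SomeReferencedType` is a hypothetical type from a referenced assembly):

```csharp
using System.IO;
using System.Reflection;

public static class ResourceLocator
{
    // Fragile: derives a path from the location of a *referenced*
    // assembly. Under NCrunch, that assembly may sit in a different
    // workspace to the test assembly.
    public static string FragilePath() =>
        Path.Combine(
            Path.GetDirectoryName(typeof(SomeReferencedType).Assembly.Location),
            "Resources");

    // Safer: anchor to the executing test assembly, which always lives
    // in the workspace the test is being run from.
    public static string SaferPath() =>
        Path.Combine(
            Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location),
            "Resources");
}
```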
When configured to execute tests concurrently, NCrunch needs to be made aware of any out-of-process resources used by tests that cannot be shared by multiple tests running at the same time. An example of this could be a database table that tests are both reading from and writing to - as these tests may overlap and cause strange behaviour.
If you are experiencing issues with tests intermittently failing due to parallel execution, read more about these issues and how you can solve them.
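As a sketch of how such resources can be declared, the NCrunch.Framework package provides an ExclusivelyUses attribute; the resource name is an arbitrary string of your choosing, and tests sharing it will not be run concurrently (the fixture and resource name below are hypothetical):

```csharp
using NCrunch.Framework;
using NUnit.Framework;

[TestFixture]
public class CustomerRepositoryTests
{
    // Both tests touch the same database table, so they declare the
    // same named resource and will never overlap in execution.
    [Test, ExclusivelyUses("CustomersTable")]
    public void WritesCustomerRow() { /* writes to the shared table */ }

    [Test, ExclusivelyUses("CustomersTable")]
    public void ReadsCustomerRow() { /* reads from the shared table */ }
}
```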
If you are experiencing issues with tests intermittently failing, even with parallel execution disabled, then it's possible that your tests have been written with certain assumptions about how they should be executed. Learn more about how NCrunch's testing environment can be different to other test runners.
With normal test runners, it's typical for tests to generally be run in a certain sequence. This can cause various cross-test issues to appear when they are run outside this sequence. It's possible to be blissfully unaware of this until the testing sequence changes and a whole range of issues occur. NCrunch can often surface these issues due to the behaviour of its test pipeline.
Sequence changes performed by NCrunch's engine can also include re-running tests within the same process. Some tests are written specifically to only run once within a process because other test runners will usually build up and tear down a process for a single execution run. This is an extremely inefficient use of resources for a continuous test runner, so NCrunch will pool the processes and re-use them as necessary. Ensure that your tests are safely re-runnable and do not leave behind erroneous state.
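A simple way to make a test safely re-runnable within a pooled process is to reset any shared state before each run instead of assuming a fresh process (the `Cache` type below is a hypothetical static holding cross-test state):

```csharp
using NUnit.Framework;

[TestFixture]
public class CacheTests
{
    [SetUp]
    public void ResetSharedState()
    {
        // Don't rely on process start-up for a clean slate - the same
        // process may already have run this test.
        Cache.Clear();
    }

    [Test]
    public void CacheStartsEmpty()
    {
        Assert.AreEqual(0, Cache.Count);
    }
}
```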
Troubleshooting these sorts of issues when they occur can be quite difficult. A general plan of attack is often to selectively try running tests in a different sequence in order to try and consistently reproduce the problem.
If you right-click a test from anywhere within NCrunch, under the 'Advanced' menu you'll find an option that allows a test to be executed within an existing process. Doing this can be useful to help narrow down sequencing and state related issues by allowing you to specify the sequence tests are executed in.
If you have tests that tend to give inconsistent results because of sequence, state or timing issues, it may be worth flagging them with NCrunch's Isolated Attribute so that they are run in isolation.
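Applying the attribute is a one-line change; the test below is hypothetical:

```csharp
using NCrunch.Framework;
using NUnit.Framework;

[TestFixture]
public class LegacySingletonTests
{
    // Isolated tells NCrunch to run this test in its own dedicated
    // process, so state it leaves behind cannot affect other tests.
    [Test, Isolated]
    public void MutatesGlobalSingleton()
    {
        /* mutate process-wide state here */
    }
}
```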
NCrunch enforces a default timeout for all tests it executes, which is not standard behaviour for other test runners. If you experience problems with tests timing out, consider learning more about timeouts and how you can control them.
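For example, assuming an NUnit test suite, a per-test timeout can be declared with NUnit's own Timeout attribute, which takes precedence over NCrunch's default:

```csharp
using NUnit.Framework;

[TestFixture]
public class SlowIntegrationTests
{
    // Timeout is specified in milliseconds - give this test two minutes
    // instead of the runner's default.
    [Test, Timeout(120000)]
    public void LongRunningImport()
    {
        /* slow integration work here */
    }
}
```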
NCrunch's MSTest runner merely emulates MSTest execution behaviour without actually relying on MSTest directly. This gives many advantages in terms of performance and reliability, but it does also introduce differences in the way that MSTest resources are handled. MSTest resource copying behaviour also varies depending upon the version of MSTest being used, which can make certain situations confusing.
MSTest supports declaring files required in the test environment using global configuration (i.e. .testsettings files). NCrunch does not support these files or their contents in any way, which means that files declared for deployment in MSTest global configuration will not be copied into the testing sandbox.
To work around this, either make use of the MSTest [DeploymentItem] attribute to declare the files in tests that require them, or reference the files relative to the working directory of the test (i.e. upwards from the testing sandbox and into the project itself).
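The first option can be sketched as follows (the `TestData\settings.xml` path is an example relative to your own project structure):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ConfigDrivenTests
{
    [TestMethod]
    // Declares the file this test needs; it is copied into the
    // deployment directory alongside the test at run time.
    [DeploymentItem(@"TestData\settings.xml")]
    public void ReadsDeployedSettingsFile()
    {
        /* open settings.xml from the deployment directory */
    }
}
```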
This exception can be thrown inside the NCrunch test environment when working with projects that make use of the security features of the .NET framework.
A common cause is incompatibility between the implementation of the CLR security model between v2.0 and v4.0 of the .NET framework. In v4.0 of the .NET framework, Microsoft introduced changes to the security model that are not compatible with the old CAS approach used in prior versions.
As NCrunch injects code with dependencies on .NET v2.0 assemblies, this can cause a security model clash for projects using v4.0 security features.
Broadly, there are two ways of solving this problem:
Because NCrunch manipulates the assemblies that are output from the build process, it needs to re-sign anything it touches. If you are experiencing signing related issues while attempting to build or run tests, it's possible that you're using a more complex signing approach than NCrunch was designed to handle (for example, you may be using a partial key or password protected key file).
NCrunch has a project level configuration option, Prevent signing of output assembly, that if set to true will suppress signing of an output assembly and should allow you to work around these issues. Note that if you set this configuration option to true for one project, you may also need to do this for all the projects in your solution to keep the signing approach consistent.
If you experience this exception while executing a test under NCrunch, but not in any other test runner, then this is due to differences in how test runners choose platforms for their test environments (see X86/X64 Platform Issues).
Some tests are designed in such a way as to make them fundamentally incompatible with NCrunch. For these tests, it can often be useful to override certain aspects of their behaviour to introduce alternative logic when they are executed by NCrunch.
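A common way to branch on this is to detect the NCrunch environment at runtime. NCrunch sets the 'NCrunch' environment variable to "1" inside its test processes, which can be checked without referencing any NCrunch assemblies (the helper and usage below are illustrative):

```csharp
using System;

public static class TestEnvironment
{
    // True when the current process was spawned by NCrunch.
    public static bool IsRunningUnderNCrunch =>
        Environment.GetEnvironmentVariable("NCrunch") == "1";
}

// Usage inside a test or fixture set-up:
//   if (TestEnvironment.IsRunningUnderNCrunch)
//       UseLightweightFixtureSetup();
```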
If you are making use of NUnit V2's [Random] attribute, you will need to set your Framework utilisation type for NUnit to StaticAnalysis. Note that changing this setting will also disable support for several edge-case NUnit features. It is recommended that you implement the random behaviour inside your test itself instead of using the NUnit attribute.
If you are using xUnit tests declared in nested classes, or making use of the DisplayName xUnit Fact attribute parameter, you may wish to consider setting the Framework utilisation type for Gallio/xUnit to DynamicAnalysis to enable better support for these tests. However, this will greatly increase the time taken by NCrunch to build and analyse your assemblies.
It should be considered a worthy goal to try and keep all your tests running continuously under NCrunch, but sometimes this just isn't possible. Some tests are just too big, too clunky, or have too many side-effects to be regularly thrashed by a continuous test runner.
NCrunch will respect test framework Ignore, Explicit and Pending attributes, though for situations where you need to be a little blunter or you just can't be bothered, it's also possible to ignore tests through the Tests window.