V4 Documentation

Integration Testing

While the most obvious use for NCrunch is for rapidly running unit tests, it can also be used for integration testing.

Integration testing is a much more challenging field to play in for a concurrent test runner - and thus NCrunch needs to be correctly configured before it can properly add value in this area. However, running your integration tests continuously is enormously valuable and can save considerable development time.

Integration Testing Challenges

Before explaining the various ways in which integration tests can be run within NCrunch, it's first worth examining the key elements that make them different from unit tests:

Integration tests often take a long time to run

This is often the single biggest concern for running integration tests in NCrunch. It's not unusual for integration tests to take many minutes (or even hours) to run. This creates some interesting challenges around delivering meaningful feedback quickly without tying up engine resources for an extended period of time.

Integration tests can have side-effects

Because they often interact with resources outside the host process, integration tests can easily be responsible for leaving areas of a database or file system in an inconsistent state. This can be somewhat of an irritation, as the test runner may be manipulating these resources while you are trying to use them for other tasks (i.e. running your application in a UI).

Integration tests can have issues with concurrency

The interaction between integration tests and external resources can cause some interesting situations when tests are being run concurrently. An example could be two tests that both try to write to the same file on the file system at the same time.

Dealing With Long Running Tests

NCrunch will tend to prioritise long running tests so that they fall later in the test pipeline. After a long running test is kicked off, it will continue to run in the background against the source code as it existed when the test was first started. It is quite normal for many changes to be made to the source code while the test is running. NCrunch tracks each of these changes individually and uses them to merge meaningful information from the long running test when it completes. This means that regardless of how old or out of date the source code was when the test started executing, NCrunch will always make the best use of any relevant information the test provides.

The main complication with long running tests lies in the fact that they can block up NCrunch's processing queue. With the default configuration, NCrunch will only run one test at a time. This means that a long running test can stop NCrunch from reporting any information on faster (and sometimes more relevant) tests for the duration of its execution.

Because of this, it is absolutely essential that you make use of NCrunch's ability to run tests in parallel.

However, even when using parallel execution, the NCrunch engine will not interrupt a test that is partway through executing. This means it is still possible for a number of long running tests to collectively block up the processing queue. To prevent this from happening, where possible you should ensure you have a processing thread reserved for executing fast tests only by making use of the Fast Lane Threads configuration option.

Dealing With Test Side-Effects

There are many tricks you can use to engineer your integration tests so that they do not have side-effects. These include correct use of database transactions, randomisation of file names, and isolation of test activities on the file system.
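As a rough sketch of two of these tricks in .NET (the class and comments here are illustrative, not part of NCrunch itself):

```csharp
using System;
using System.IO;
using System.Transactions;

public class SideEffectFreePatterns
{
    public void DatabaseWorkIsRolledBack()
    {
        // Wrap the test body in an ambient transaction that is never
        // completed, so all database changes made through enlisted
        // connections are rolled back when the scope is disposed.
        using (new TransactionScope())
        {
            // ... exercise database-dependent code here ...
        } // No call to Complete() => automatic rollback.
    }

    public void FileWorkIsIsolated()
    {
        // Give each test run its own uniquely named working directory so
        // that concurrent runs never touch the same files.
        string workDir = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N"));
        Directory.CreateDirectory(workDir);
        try
        {
            // ... exercise file-system-dependent code against workDir ...
        }
        finally
        {
            Directory.Delete(workDir, recursive: true);
        }
    }
}
```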

Where it is impossible to remove side-effects, they can often still be managed unobtrusively.

The most common irritating side-effect of continuously running integration tests against a database is that they can constantly destroy or manipulate test data that is actually in use by someone performing manual testing.

An elegant solution to this problem is to ensure the database-dependent tests are only run continuously against a separate database or database schema. This can be achieved by introducing alternative behaviour for tests run by NCrunch versus those run with a manual test runner.

NCrunch will religiously set the environment variable 'NCrunch' equal to '1' inside each of its task runner processes. This applies to both build tasks and test tasks. You can make use of this environment variable to redirect your continuous tests to a different schema/database, for example:

	if (Environment.GetEnvironmentVariable("NCrunch") == "1")
		connectionString = "server=localhost;database=myDatabase_ncrunch;integrated security=SSPI";
	else
		connectionString = "server=localhost;database=myDatabase;integrated security=SSPI";

You can also make use of the NCrunchEnvironment.NCrunchIsResident() method, which checks this environment variable internally. There are also other ways to introduce alternative build and test behaviour for code being executed by NCrunch.
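The same redirection can be expressed with this helper method. A minimal sketch, assuming a reference to the NCrunch.Framework assembly and reusing the database names from the example above:

```csharp
using NCrunch.Framework; // Provides NCrunchEnvironment.

public static class TestDatabase
{
    // Route NCrunch's continuous runs at a dedicated copy of the database,
    // leaving the main database untouched for manual testing. The database
    // names are carried over from the connection string example above.
    public static string ConnectionString =>
        NCrunchEnvironment.NCrunchIsResident()
            ? "server=localhost;database=myDatabase_ncrunch;integrated security=SSPI"
            : "server=localhost;database=myDatabase;integrated security=SSPI";
}
```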

Dealing With Concurrency Issues

When running integration tests concurrently, it's important that you properly attribute your tests with ExclusivelyUsesAttribute and InclusivelyUsesAttribute where appropriate. This prevents the testing engine from running mutually exclusive tests at the same time. An easier but poorer-performing solution is to use SerialAttribute.
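For example, two tests that write to the same file can be attributed as exclusively using a shared resource name. A sketch, assuming NUnit and the NCrunch.Framework assembly ("ReportFile" is an arbitrary name chosen for illustration):

```csharp
using NCrunch.Framework; // Provides ExclusivelyUses / InclusivelyUses.
using NUnit.Framework;

[TestFixture]
public class ReportExportTests
{
    // NCrunch will never run two tests marked as exclusively using the
    // same resource name at the same time.
    [Test, ExclusivelyUses("ReportFile")]
    public void ExportCreatesReportFile() { /* ... */ }

    [Test, ExclusivelyUses("ReportFile")]
    public void ExportOverwritesExistingReportFile() { /* ... */ }

    // Tests that only read the resource can be marked as inclusively
    // using it: they may run alongside each other, but not alongside any
    // test that exclusively uses the same name.
    [Test, InclusivelyUses("ReportFile")]
    public void ReadsReportFileWithoutModifyingIt() { /* ... */ }
}
```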

It is also possible to engineer your tests in such a way that parallel execution does not cause concurrency issues.

Consider that with the behaviour of NCrunch's test pipeline, it's possible for a test to be run concurrently with another version of itself (unless the Allow Tests To Run In Parallel With Themselves setting is changed from its default). Therefore, you should make use of the above attributes for all tests that use resources outside the test application domain, regardless of whether those tests share resources with other tests.

Partially Continuous or Manual Execution

Sometimes it's simply easier to avoid running some integration tests continuously, and to run them manually instead. The best way to do this with NCrunch is to create a new custom engine mode with criteria set to exclude integration tests that you don't want to run continuously.

The engine mode customisation window allows you to add a filter that will exclude tests by their category. You can then categorise your tests according to whether they should be run continuously or not. Note: Tests that are not run continuously must be run manually regularly to ensure their code coverage and performance information stays up to date.
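A sketch of such categorisation, assuming NUnit and assuming the custom engine mode's filter excludes the category name "Integration":

```csharp
using NUnit.Framework;

public class OrderTests
{
    // Excluded from continuous execution by the custom engine mode's
    // category filter; run manually (and regularly) instead.
    [Test, Category("Integration")]
    public void SyncsOrdersAgainstRealService() { /* ... */ }

    // Uncategorised tests are unaffected by the filter and keep
    // running continuously.
    [Test]
    public void CalculatesOrderTotal() { /* ... */ }
}
```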
