Performance Tuning

The default configuration for NCrunch tends to favour compatibility over performance. Once you are familiar with the basic workings of the engine, it is worthwhile to make an effort to tune NCrunch more specifically for your hardware and the projects you are working on.

Optimising CPU Utilisation

NCrunch's global configuration options focus on settings that are specific to the machine you are using. Performance-wise, the most important options to adjust are CPU Cores Assigned To NCrunch and CPU Cores Assigned To IDE. These options control how NCrunch distributes its processing load between your CPUs. Assigning more cores to NCrunch will allow it to process its test pipeline faster, but beware of taking too much CPU away from your IDE, as this may cause the IDE to become sluggish and jerky.

When assigning a CPU core to be used for your IDE or the background processing, NCrunch will hard lock the CPU at kernel level to only process the work it has been assigned to do (within the scope of the IDE and NCrunch processes). The major benefit of this is that a certain percentage of your overall CPU time can be reserved entirely for handling actions within the IDE without ever being interrupted by NCrunch's background churning.

The optimal configuration of your CPU cores will depend entirely on your situation. If you are working within a large solution and you are also using JetBrains' ReSharper with solution-wide analysis enabled, or you are running VS2015 or above, it is recommended that you treat the IDE as a first-class citizen and provide it with at least two processor cores (if possible). VS2022 usually needs at least 4 CPU cores. If you are working on smaller projects with older versions of Visual Studio (i.e. VS2013 and below) and are not using real-time code analysis tools, one CPU core may be enough to give adequate performance.
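As an illustrative starting point only: on an 8-core machine running VS2022 (with or without ReSharper), assigning 4 cores to the IDE and the remaining 4 to NCrunch is a reasonable first split, which you can then adjust based on how responsive the IDE feels while the engine is busy.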

It is strongly recommended that you avoid assigning the same CPU cores to both the IDE and NCrunch. Sharing cores in this way can cause excessive purging of CPU cache as processor cores flick between tasks, and it will likely impact the performance of your IDE.

Where the hardware is available, it's also very much worth looking into offloading some (or all) of NCrunch's processing onto other computers using distributed processing.

If you are using RDI and you are struggling with CPU consumption, it may be worth disabling RDI for large tests using EnableRdiAttribute or setting your method data limit to a lower value. A lower method data limit will force RDI to stop collecting data earlier for high-CPU methods, freeing up CPU at the expense of less data being available in RDI overlays.
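As a sketch only, RDI might be disabled for a particularly heavy test by applying EnableRdiAttribute directly to it. The constructor signature shown here is an assumption and should be checked against the NCrunch.Framework reference assembly; the test fixture itself is hypothetical.

    using NCrunch.Framework;
    using NUnit.Framework;

    public class BulkImportTests
    {
        // Assumption: passing false switches off RDI data collection for this
        // test only (verify the exact EnableRdiAttribute signature).
        [Test, EnableRdi(false)]
        public void ImportsOneMillionRows()
        {
            // ... heavy, CPU-bound test body ...
        }
    }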

Optimising Processing Times

Reducing processing times is all about parallel execution of tasks in the processing queue.

The Max number of processing threads configuration option controls the number of background tasks that can be executed by NCrunch at any one time (as visible in the processing queue).

Under normal circumstances, you should set this value to be equal to the number of processing cores you have assigned to NCrunch. If you are running automated tests that do a large amount of blocking (e.g. on sockets, timers, database calls, etc.), you may find it useful to set this value slightly higher, as there should be more CPU slack to use.
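For example, with 4 cores assigned to NCrunch, 4 processing threads is the natural starting point; if most of your tests spend their time waiting on a database or the network, stepping up to 5 or 6 threads may keep those cores busier. These numbers are illustrative only, and the monitoring suggested below is the real guide.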

Note that setting this value too high can cause the CPU to become overloaded, causing excessive context switching and slowing down the cycle times of NCrunch's core engine. It's often worth running through a few test cycles with a CPU performance monitor running in order to strike the right balance.

If you are experiencing issues with CPU-intensive tests running slowly under NCrunch, there are various options for resolving this issue.

Optimising Memory Usage

NCrunch will generally kick off new processes as they are required, and will pool them when they are not being used. This pooling helps to ensure that NCrunch does not waste time spawning new processes when existing ones can already do the job.

The Max number of test runner processes to pool configuration option controls the size of the pool of worker processes used to perform test execution tasks.

NCrunch uses two process pools - one for building components, and one for running tests. Because NCrunch will always spawn enough processes to cater for all tasks being executed by its engine, the physical number of task runner processes may vary considerably.

If you are running some very heavy-weight tests and you find your memory utilisation is too high, try turning down the Max number of test runner processes to pool configuration option. If NCrunch is still using too much memory, reduce the Max number of processing threads setting to prevent the engine from kicking off too much work at the same time. Another setting that can greatly reduce memory consumption is Terminate test runner tasks when all test execution is complete, as this will prevent test runner processes from being left lying around by NCrunch when they are unlikely to be used again.

When working with tests or builds that tend to accumulate (or leak) memory over time, it's worth taking a look at the Test process memory limit and Build process memory limit global configuration options.

Optimising Build Times

NCrunch generally tends to behave better with solutions that are made up of a large number of small projects. This is because the optimisations in the core engine allow NCrunch to only build the components it actually needs to, rather than the entire set (or a large subset).

Creating solutions with large numbers of components can have drawbacks, one of which is that it can make a codebase harder to understand. This is something that should be balanced carefully, as a single large component containing 200,000 lines of code could easily add a huge amount of time to any of NCrunch's engine cycles.

Unless it is absolutely required, ensure you set the Copy referenced assemblies to workspace project-level configuration option to false. Leaving this setting turned on will force NCrunch to build referencing assemblies every time their dependencies are changed - greatly increasing the amount of work the engine needs to do. The copying of assemblies around the file system will also marginally increase build times for the component.

Optimising Tests

Writing high quality automated tests has always been important, and working with NCrunch makes this even more so.

High quality tests should be:

  • Atomic
  • Repeatable
  • Consistent
  • Without side-effects
  • Able to be run in parallel
  • FAST

The first four points above are a basic requirement of any automated test used regularly within a development process. Any tests that do not meet these four requirements should be deliberately ignored or excluded from continuous execution under NCrunch (using a custom engine mode) until they can be fixed up, or flagged with the Isolated attribute so that they can be run in their own task runner process.
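As a sketch of that last option (using a hypothetical NUnit fixture), the Isolated attribute from NCrunch.Framework can be applied to a test that misbehaves when it shares a process with other tests:

    using NCrunch.Framework;
    using NUnit.Framework;

    public class LegacySingletonTests
    {
        // Isolated tells NCrunch to run this test in its own task runner
        // process, so its process-wide state cannot bleed into other tests.
        [Test, Isolated]
        public void MutatesProcessWideState()
        {
            // ... test body that relies on static/global state ...
        }
    }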

Tests that cannot be run in parallel are a performance constraint for NCrunch and must be marked using the ExclusivelyUses and InclusivelyUses attributes. Failing to do this will result in the tests interfering with each other and going haywire when NCrunch is set to run them in parallel.
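For example, two tests that touch the same shared database could be marked as below so that NCrunch never schedules conflicting tests at the same time. The resource name is an arbitrary string of your choosing, and the fixture and test framework shown are only illustrative:

    using NCrunch.Framework;
    using NUnit.Framework;

    public class CustomerRepositoryTests
    {
        // ExclusivelyUses: no other test declaring this resource name may run
        // at the same time as this test.
        [Test, ExclusivelyUses("TestDatabase")]
        public void SavesCustomerToDatabase()
        {
            // ... writes to the shared test database ...
        }

        // InclusivelyUses: tests sharing this resource may run alongside each
        // other, but never alongside a test that ExclusivelyUses the same name.
        [Test, InclusivelyUses("TestDatabase")]
        public void ReadsCustomerFromDatabase()
        {
            // ... reads from the shared test database ...
        }
    }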

The speed of tests under execution is always very important, as faster tests can be run more frequently and with a lower risk of blocking up the test pipeline.

NCrunch's instrumentation can add considerable weight to some tests with performance-critical loops. If you notice any of your tests running significantly slower with NCrunch, consider turning off the Analyse line execution times project-level configuration setting, or look at suppressing code coverage for lines of code that are executed with very high frequency.
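For example, NCrunch's coverage suppression comments can be used to exclude a hot loop from line-level instrumentation; the surrounding method is purely illustrative:

    public static long SumOfSquares(int[] values)
    {
        long total = 0;

        //ncrunch: no coverage start
        // This loop may run millions of times in performance-critical tests,
        // so line-by-line coverage tracking is suppressed for it.
        for (int i = 0; i < values.Length; i++)
        {
            total += (long)values[i] * values[i];
        }
        //ncrunch: no coverage end

        return total;
    }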

Generally speaking, high quality tests are critical to an ideal experience when working with NCrunch.
