How to use OneTimeSetUp? - C#

NUnit has a OneTimeSetUp attribute. I am trying to think of some scenarios where I could use it. However, I cannot find any GitHub examples online (even my link does not have an example).
Say I had some code like this:
[TestFixture]
public class MyFixture
{
    IProduct Product;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use Product here
    }

    [Test]
    public void Test2()
    {
        // Use Product here
    }

    [Test]
    public void Test3()
    {
        // Use Product here
    }
}
Is this a valid scenario for initialising a variable inside OneTimeSetUp, i.e. because in this case Product is used by all of the Test methods?

The fixture setup for Moq seems reasonable to me.
However, reusing the same Product is suspect: it can be modified by one test and accidentally carried into the next, which can lead to unstable tests. I would create a new Product for each test run, at least if it is the system under test.

Yes. No. :-)
Yes... [OneTimeSetUp] works as you expect. It executes once, before all of your tests, and all of your tests will use the same Product.
No... You should probably not use it this way, assuming that the initialization of Product is not extremely costly - and it likely is not, since it's based on a mock.
OneTimeSetUp is best reserved for extremely costly steps like creating a database. In fact, it tends to have very little use in pure unit testing.
You might think, "Well if it's even a little more efficient, why not use it all the time?" The answer is that you are writing tests, which implies you don't actually know what the code under test will do. One of your tests may change the common product, possibly even put it in an invalid state. Your other tests will start to fail, making it tricky to figure out the source of the problem. Almost invariably, you want to start out using SetUp, which will create a separate product for each one of your tests.
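For contrast, here is a minimal sketch of that per-test alternative, reusing the AutoFixture setup from the question. Each test gets a fresh Product, so no test can observe state left behind by another:
[TestFixture]
public class MyFixture
{
    IProduct Product;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test, so each test gets its own Product.
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use a fresh Product here
    }
}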

Yes, you use OneTimeSetUp (and OneTimeTearDown) to do all the setup that will be shared among tests in that fixture. It runs once per fixture, so no matter how many tests the fixture contains, the code executes only once. Typically you initialize instances held in private fields for the other tests to use, so that you don't end up with lots of duplicated setup code.
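Where OneTimeSetUp does earn its keep is with an expensive shared resource. A minimal sketch, assuming a hypothetical ExpensiveDatabase type (the name and its members are illustrative, not a real API):
[TestFixture]
public class DatabaseTests
{
    private ExpensiveDatabase _db; // hypothetical costly resource

    [OneTimeSetUp]
    public void CreateDatabase()
    {
        // Costly step: runs once, before all tests in this fixture.
        _db = ExpensiveDatabase.Create();
    }

    [OneTimeTearDown]
    public void DropDatabase()
    {
        // Clean up once, after all tests in this fixture have run.
        _db.Dispose();
    }

    [Test]
    public void QueryReturnsRows()
    {
        // Read-only use of _db is safe to share between tests.
    }
}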

Related

NUnit global SetupFixture not being executed

I am trying to execute some code before all tests - even more specifically, even before any test case sources are evaluated.
The supposed-to-work solution does not execute the code early enough (the breakpoint is hit only after the test case sources were evaluated):
[SetUpFixture]
public class GlobalSetup
{
    [OneTimeSetUp]
    public void Setup()
    {
        // Unfortunately does stuff after TestCaseSources
    }
}
I managed to achieve this by creating a fake test fixture with a static constructor, like this:
[TestFixture]
public class GlobalSetup
{
    // Step 1
    static GlobalSetup()
    {
        // Does stuff before every other TestCaseSource
    }

    // Step 3
    [Test, TestCaseSource(nameof(GlobalSetup.TestInitializerTestCaseData))]
    public void Setup() { }

    // Step 2
    public static IEnumerable TestInitializerTestCaseData => new[] { new TestCaseData() };
}
What is the correct way to not use this workaround?
As the comments in the issue Pablo notPicasso cited indicate, you are trying to use NUnit in a way it isn't designed to be used. A global setup is possible, of course, but it runs before all other test execution. Test case sources are evaluated as a part of test discovery which has to be complete before any test execution can begin.
It's usually possible to use a global setup by reducing the amount of work done in your TestCaseSource. This is desirable anyway - TestCaseSource should always be as lightweight as possible.
Your example doesn't indicate exactly what you are trying to do before the tests are loaded. If you add more info, I can refine this answer. Meanwhile, here are a couple of examples...
If your TestCaseSource is creating instances of the objects you are testing, don't do that. Instead, initialize parameters like strings and ints that will be used in the SetUp or OneTimeSetUp for your tests.
If your TestCaseSource is initializing a database, don't do that. Instead, save the parameters that are needed to initialize the database and do the work in a OneTimeSetUp at some level in the test hierarchy.
Hopefully, you get the idea... or provide some further info on what you are doing.
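A minimal sketch of that second suggestion, with illustrative names: the TestCaseSource yields only plain parameters at discovery time, and the expensive database work is deferred to OneTimeSetUp at execution time:
[TestFixture]
public class ImportTests
{
    // Evaluated during test discovery, so it stays trivially cheap:
    // it only yields strings and ints.
    public static IEnumerable<TestCaseData> Cases()
    {
        yield return new TestCaseData("customers.csv", 10);
        yield return new TestCaseData("orders.csv", 25);
    }

    [OneTimeSetUp]
    public void InitializeDatabase()
    {
        // The costly work happens here, at execution time,
        // long after the test cases have been discovered.
    }

    [TestCaseSource(nameof(Cases))]
    public void ImportsExpectedRows(string file, int expectedRows)
    {
        // Use the database initialized above.
    }
}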
The workaround you have selected looks like it would work in the current implementation. However, it depends on the exact, unspecified order in which NUnit does things internally. It could easily stop working in a future release.

How to execute every single test in a test class in an own process

We use C# and NUnit 3.6.1 in our project and we are considering parallel test execution to reduce the duration. As far as I know, it is possible to parallelize the execution of TestFixtures with a Parallelizable attribute on the classes in an assembly. We want to go further and parallelize all tests marked with Test or TestCase inside a test class.
A typical test class looks like:
[TestFixture]
public class MyTestClass
{
    private MySUT _sut;
    private Mock<IDataBase> _mockedDB;

    [SetUp]
    public void SetUpSUT()
    {
        _mockedDB = new Mock<IDataBase>();
        _sut = new MySUT(_mockedDB.Object);
    }

    [TearDown]
    public void TearDownSUT()
    {
        _mockedDB = null;
        _sut = null;
    }

    [Test]
    public void MyFirstTC()
    {
        // Here do some testing stuff with _sut
    }

    [Test]
    public void MySecondTC()
    {
        // Here do some testing stuff with _sut again
    }
}
As you can see, we have some fields that we use in every test. To get better separation, we are thinking of executing every single test in its own process. I have no idea how to do that - or do we have to change something else to make parallel execution possible?
Thanks in advance!
Yes, you can run individual tests in separate threads (not processes), starting with NUnit 3.7, using the ParallelScope parameter of the Parallelizable attribute. From the NUnit documentation:
ParallelScope Enumeration
This is a [Flags] enumeration used to specify which tests may run in parallel. It applies to the test upon which it appears and any subordinate tests. It is defined as follows:
[Flags]
public enum ParallelScope
{
    None,     // the test may not be run in parallel with other tests
    Self,     // the test itself may be run in parallel with other tests
    Children, // child tests may be run in parallel with one another
    Fixtures  // fixtures may be run in parallel with one another
}
NUnit has no builtin way to run individual tests in separate processes. It allows you to run entire assemblies in separate processes or individual tests within an assembly on separate threads, as you have discovered already.
The question has come up in the past as a possible extension to NUnit, but has never been acted on up to now because there doesn't seem to be a very wide demand for it.
Not a real positive answer, I'm afraid. :-(
In VSTS, you can check a box to isolate tests. You can also run in parallel or in series. In Visual Studio, I don't think it's obvious/possible to do this. If it's possible in VSTS, there is probably a command line you can use to pass the appropriate switches.

Is NUnit Setup attribute a code smell?

I have written a set of NUnit classes which have SetUp and TearDown attributes. Then I read this: http://jamesnewkirk.typepad.com/posts/2007/09/why-you-should-.html. I can understand the author's point about having to scroll up and down when reading the unit tests. However, I also see the benefit of SetUp and TearDown. For example, in a recent test class I did this:
private Product1 _product1;
private Product2 _product2;
private IList<Product> _products;

[SetUp]
public void Setup()
{
    _product1 = new Product1();
    _product2 = new Product2();
    _products = new List<Product>();
    _products.Add(_product1);
    _products.Add(_product2);
}
Here _product1, _product2 and _products are used by every test. Therefore it seems to violate DRY to put them in every method. Should they be put in every test method?
This question is very subjective, but I don't believe it is a code smell. The example in the linked blog post was very arbitrary and did not use the variables set up in every test. If you look at your code and you are not using the variables from the SetUp, then yes, it is probably a code smell.
If your tests are grouped well, however, then each test fixture will be testing a group of functionality that often needs the same setup. In this case, the DRY principle wins out in my book. James argued in his post that he needed to look in three methods to see the state of the data, but I would counter that too much setup code in a test method obscures the purpose of the test.
Copying setup code like you have in your example also makes your tests harder to maintain. If your Product class changes in the future and requires additional construction, then you will need to change it in every test, not in one place. Adding that much setup code to each test method would also make your test class very long and hard to scan.

What's the best practice in organizing test methods that cover steps of a behavior?

I have a FileExtractor class with a Start method which performs several steps.
I've created a test class called "WhenExtractingInvalidFile.cs" within my folder called "FileExtractorTests" and added some test methods inside it, as below, which should verify the steps of the Start() method:
[TestMethod]
public void Should_remove_original_file()
{
}

[TestMethod]
public void Should_add_original_file_to_errorStorage()
{
}

[TestMethod]
public void Should_log_error_locally()
{
}
This way, it'd nicely organize the behaviors and the expectations that should be met.
The problem is that most of the logic of these test methods is the same, so should I be creating one test method that verifies all the steps, or keep them separate as above?
[TestMethod]
public void Should_remove_original_file_then_add_original_file_to_errorStorage_then_log_error_locally()
{
}
What's the best practice?
While it's commonly accepted that the Act section of tests should only contain one call, there's still a lot of debate over the "One Assert per Test" practice.
I tend to adhere to it because:
- when a test fails, I immediately know (from the test name) which of the multiple things we want to verify on the method under test went wrong.
- tests that involve mocking are already harder to read than regular tests; they can easily become arcane when you're asserting against multiple mocks in the same test.
If you don't follow it, I'd at least recommend that you include meaningful Assert messages in order to minimize head scratching when a test fails.
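For example (the names originalPath and errorStorage are illustrative, not from the question):
[TestMethod]
public void Should_handle_invalid_file()
{
    // Each message tells you which expectation failed without
    // having to re-read the whole test body.
    Assert.IsFalse(File.Exists(originalPath),
        "original file should have been removed by Start()");
    Assert.IsTrue(errorStorage.Contains(originalPath),
        "original file should have been added to error storage");
}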
I would do the same thing you did, but with an _on_start postfix, like this: Should_remove_original_file_on_start. The single combined method will only ever give you a maximum of one assert failure, even when several aspects of Start are broken.
DRY - don't repeat yourself.
Think about the scenario where your test fails. What would be most useful from a test organization point?
The second dimension to look at is maintenance. Do you want to run and maintain 1 test or n tests? Don't overburden development with writing many tests that have little value (I think TDD is a bit overrated for this reason). A test is more valuable if it exercises a larger path of the code rather than a short one.
In this particular scenario I would create a single test. If this test fails often and you are not reaching the root cause of the problem fast enough, re-factor into multiple tests.

Unit Testing: Self-contained tests vs code duplication (DRY)

I'm making my first steps with unit testing and am unsure about two paradigms which seem to contradict each other when writing unit tests:
Every single unit test should be self-contained and not depend on others.
Don't repeat yourself.
To be more concrete, I've got an importer which I want to test. The Importer has an "Import" function, taking raw data (e.g. out of a CSV) and returning an object of a certain kind, which will also be stored into a database through an ORM (LinqToSQL in this case).
Now I want to test several things, e.g. that the returned object is not null, that its mandatory fields are not null or empty, and that its attributes have the correct values. I wrote 3 unit tests for this. Should each test do the import and get the job, or does this belong in general setup logic? On the other hand, believing this blog post, the latter would be a bad idea as far as my understanding goes. Also, wouldn't this violate the self-containment?
My class looks like this:
[TestFixture]
public class ImportJob
{
    private TransactionScope scope;
    private CsvImporter csvImporter;
    private readonly string[] row = { "" };

    public ImportJob()
    {
        CsvReader reader = new CsvReader(new StreamReader(
            @"C:\SomePath\unit_test.csv", Encoding.Default),
            false, ';');
        reader.MissingFieldAction = MissingFieldAction.ReplaceByEmpty;

        int fieldCount = reader.FieldCount;
        row = new string[fieldCount];
        reader.ReadNextRecord();
        reader.CopyCurrentRecordTo(row);
    }

    [SetUp]
    public void SetUp()
    {
        scope = new TransactionScope();
        csvImporter = new CsvImporter();
    }

    [TearDown]
    public void TearDown()
    {
        scope.Dispose();
    }

    [Test]
    public void ImportJob_IsNotNull()
    {
        Job j = csvImporter.ImportJob(row);
        Assert.IsNotNull(j);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreNotNull()
    {
        Job j = csvImporter.ImportJob(row);
        Assert.IsNotNull(j.Customer);
        Assert.IsNotNull(j.DateCreated);
        Assert.IsNotNull(j.OrderNo);
    }

    [Test]
    public void ImportJob_MandatoryFields_AreValid()
    {
        Job j = csvImporter.ImportJob(row);
        Customer c = csvImporter.GetCustomer("01-01234567");

        Assert.AreEqual(j.Customer, c);
        Assert.That(j.DateCreated.Date == DateTime.Now.Date);
        Assert.That(j.OrderNo == row[(int)Csv.RechNmrPruef]);
    }

    // etc. ...
}
As can be seen, I'm running the line Job j = csvImporter.ImportJob(row);
in every unit test, as the tests should be self-contained. But this violates the DRY principle and may cause performance issues some day.
What's the best practice in this case?
Your test classes are no different from usual classes, and should be treated as such: all good practices (DRY, code reuse, etc.) should apply there as well.
That depends on how much of your scenario is common to your tests. In the blog post you referred to, the main complaint was that the SetUp method did different setup for the three tests, and that can't be considered best practice. In your case you've got the same setup for each test/scenario, so you should use a shared SetUp instead of duplicating the code in each test. If you later find that some tests do not share this setup, or require a different setup shared between a set of tests, then refactor those tests into a new test case class. You could also have shared setup methods that are not marked with [SetUp] but get called at the beginning of each test that needs them:
[Test]
public void SomeTest()
{
    setupSomeSharedState();
    ...
}
A way of finding the right mix could be to start off without a SetUp method, and when you find that you're duplicating code for test setup, refactor it into a shared method.
You could put the
Job j = csvImporter.ImportJob(row);
in your SetUp method. That way you're not repeating code. You actually should run that line of code for each and every test; otherwise tests will start failing because of things that happened in other tests, and that becomes hard to maintain.
The performance problem isn't caused by DRY violations. You actually should set up everything for each and every test. But note that these aren't unit tests, they're integration tests: you rely on an external file to run them. You could make ImportJob read from a stream instead of having it open a file directly. Then you could test with a MemoryStream.
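A sketch of that seam, assuming a hypothetical ImportJob overload that accepts a TextReader instead of opening the file itself:
[Test]
public void ImportJob_IsNotNull_FromInMemoryCsv()
{
    string csv = "01-01234567;2017-01-01;ORDER-42"; // inline test data
    using (var stream = new MemoryStream(Encoding.Default.GetBytes(csv)))
    using (var reader = new StreamReader(stream, Encoding.Default))
    {
        // Hypothetical overload taking a TextReader - no file system needed.
        Job j = csvImporter.ImportJob(reader);
        Assert.IsNotNull(j);
    }
}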
Whether you move
Job j = csvImporter.ImportJob(row);
into the SetUp method or not, it will still be executed before every test. If you have the exact same line at the top of each test, then it is only logical to move that line into the SetUp portion.
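Concretely, that refactoring of the question's fixture would look like this:
private Job _job;

[SetUp]
public void SetUp()
{
    scope = new TransactionScope();
    csvImporter = new CsvImporter();
    _job = csvImporter.ImportJob(row); // still executed before every test
}

[Test]
public void ImportJob_IsNotNull()
{
    Assert.IsNotNull(_job);
}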
The blog entry that you posted complained about the setup of the test values being done in a function disconnected from (possibly not on the same screen as) the test itself - but your case is different, in that the test data is driven by an external text file, so that complaint doesn't match your specific use case either.
In one of my projects, we agreed as a team that we would not implement any initialization logic in unit test constructors. We have the SetUp, TestFixtureSetUp, and SetUpFixture (since version 2.4 of NUnit) attributes. They are enough for almost all cases where we need initialization. We require developers to use one of these attributes and to explicitly define whether the initialization code runs before each test, before all tests in a fixture, or before all tests in a namespace.
However, I will disagree that unit tests should always conform to all the good practices expected in usual development. It is desirable, but it is not a rule. My point is that in real life the customer doesn't pay for unit tests. The customer pays for the overall quality and functionality of the product. He is not interested in whether you provide a bug-free product by covering 100% of the code with unit tests/automated GUI tests, or by employing 3 manual testers per developer who will click on every piece of the screen after each build.
Unit tests don't add business value to the product; they allow you to save on development and testing effort and force developers to write better code. So it is always up to you - will you spend additional time on refactoring to make the unit tests perfect? Or will you spend the same amount of time adding new features for the customers of your product? Do not forget that unit tests should be as simple as possible. How do you find the golden mean?
I suppose this depends on the project, and the PM or team lead needs to plan and estimate the quality of unit tests, their completeness, and code coverage, just as they estimate all other business features of the product. In my opinion, it is better to have copy-paste unit tests that cover 80% of production code than to have very well designed and separated unit tests that cover only 20%.
