The question is pretty straightforward: I need to know how to create multiple cleanup methods.
I have some tests, and each test creates a different file. I would like to bind a cleanup action to each, so I can delete the specified files for each test.
For example:
[TestMethod]
public void TestMethodA()
{
// do stuff
}
[TestMethod]
public void TestMethodB()
{
// do stuff
}
[TestCleanup]
public void CleanUpA()
{
// clean A
}
[TestCleanup]
public void CleanUpB()
{
// clean B
}
Any ideas?
There are, potentially, a couple of options that I can see. A simple solution that might work for you is to have a class-level variable in your unit test class that stores the path to the file used by the currently executing test. Have each test assign the current file path to that variable. Then you can have a single cleanup method that uses that variable in order to clean up the file.
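A minimal sketch of that first approach (MSTest; the field and file names are illustrative):

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FileTests
{
    // Path of the file created by the currently executing test.
    private string _currentFilePath;

    [TestMethod]
    public void TestMethodA()
    {
        _currentFilePath = "fileA.txt";
        File.WriteAllText(_currentFilePath, "A");
        // ... assertions ...
    }

    [TestMethod]
    public void TestMethodB()
    {
        _currentFilePath = "fileB.txt";
        File.WriteAllText(_currentFilePath, "B");
        // ... assertions ...
    }

    // A single cleanup method runs after every test and deletes
    // whichever file that test recorded.
    [TestCleanup]
    public void CleanUp()
    {
        if (_currentFilePath != null && File.Exists(_currentFilePath))
            File.Delete(_currentFilePath);
    }
}
```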
Another idea, but one that might require significant refactoring, is to use a dependency-injection approach in order to abstract your code from the file system. Perhaps instead of your code creating/opening the file itself, you could have an I/O abstraction component handle creating the file and then just return a Stream object to your main code. When running your unit tests you could provide your main code with a unit-testing version of the I/O abstraction component that returns a MemoryStream instead of a FileStream, which would avoid the need to perform cleanup. I could update with a rough example if my explanation isn't clear.
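A rough sketch of that abstraction idea (the interface name and members are hypothetical):

```csharp
using System.IO;

// Hypothetical seam between your code and the file system.
public interface IStreamProvider
{
    Stream Create(string path);
}

// Production implementation: writes real files.
public class FileStreamProvider : IStreamProvider
{
    public Stream Create(string path) => new FileStream(path, FileMode.Create);
}

// Unit-test implementation: stays in memory, so there is nothing to clean up.
public class InMemoryStreamProvider : IStreamProvider
{
    public Stream Create(string path) => new MemoryStream();
}
```

Your main code takes an `IStreamProvider` in its constructor and never touches the file system directly; the tests hand it the in-memory version.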
Related
I am trying to execute some code before all tests - more specifically, even before any test case sources are evaluated.
The solution that is supposed to work does not execute the code early enough (the breakpoint is hit only after the test case sources have been evaluated):
[SetUpFixture]
public class GlobalSetup
{
[OneTimeSetUp]
public void Setup()
{
// Unfortunately does stuff after TestCaseSources
}
}
I managed to achieve this by creating a fake test fixture with static constructor like this:
[TestFixture]
public class GlobalSetup
{
//Step 1
static GlobalSetup ()
{
// Does stuff before every other TestCaseSource
}
//Step 3
[Test, TestCaseSource (nameof (GlobalSetup.TestInitializerTestCaseData))]
public void Setup () { }
//Step 2
public static IEnumerable TestInitializerTestCaseData => new[] { new TestCaseData () };
}
What is the correct way to not use this workaround?
As the comments in the issue Pablo notPicasso cited indicate, you are trying to use NUnit in a way it isn't designed to be used. A global setup is possible, of course, but it runs before all other test execution. Test case sources are evaluated as a part of test discovery which has to be complete before any test execution can begin.
It's usually possible to use a global setup by reducing the amount of work done in your TestCaseSource. This is desirable anyway - TestCaseSource should always be as lightweight as possible.
Your example doesn't indicate exactly what you are trying to do before the tests are loaded. If you add more info, I can refine this answer. Meanwhile, here are a couple of examples...
If your TestCaseSource is creating instances of the objects you are testing, don't do that. Instead, have it supply simple parameters like strings and ints, and create the objects in the SetUp or OneTimeSetUp for your tests.
If your TestCaseSource is initializing a database, don't do that. Instead, save the parameters needed to initialize the database and do the work in a OneTimeSetUp at some level of the test hierarchy.
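As a sketch of what "lightweight" means here (NUnit; the names are illustrative): the source supplies only plain values, and the expensive work happens in OneTimeSetUp, which runs during execution, after discovery has finished:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class DatabaseTests
{
    // Lightweight source: plain data only, evaluated at discovery time.
    public static IEnumerable<TestCaseData> Cases => new[]
    {
        new TestCaseData("customers", 10),
        new TestCaseData("orders", 25),
    };

    [OneTimeSetUp]
    public void InitDatabase()
    {
        // Expensive initialization happens here, during execution,
        // not while the test cases are being discovered.
    }

    [Test, TestCaseSource(nameof(Cases))]
    public void RowCountIsCorrect(string table, int expected)
    {
        // ... query the database initialized above ...
    }
}
```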
Hopefully, you get the idea... or provide some further info on what you are doing.
The workaround you have selected looks like it would work in the current implementation. However, it depends on the exact, unspecified order in which NUnit does things internally. It could easily stop working in a future release.
NUnit has a OneTimeSetUp attribute. I am trying to think of some scenarios in which I could use it. However, I cannot find any GitHub examples online (even my link does not have an example).
Say I had some code like this:
[TestFixture]
public class MyFixture
{
IProduct Product;
[OneTimeSetUp]
public void OneTimeSetUp()
{
IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
Product = fixture.Create<Product>();
}
[Test]
public void Test1()
{
//Use Product here
}
[Test]
public void Test2()
{
//Use Product here
}
[Test]
public void Test3()
{
//Use Product here
}
}
Is this a valid scenario for initialising a variable inside OneTimeSetUp, i.e. because in this case Product is used by all of the test methods?
The fixture setup for Moq seems reasonable to me.
However, reusing the same Product is suspect: it can be modified in one test and accidentally reused in another, which can lead to unstable tests. If it is the system under test, I would create a new Product for each test run.
Yes. No. :-)
Yes... [OneTimeSetUp] works as you expect. It executes once, before all your tests and all your tests will use the same product.
No... You should probably not use it this way, assuming that the initialization of Product is not extremely costly - it likely is not since it's based on a mock.
OneTimeSetUp is best used only for extremely costly steps like creating a database. In fact, it tends to have very little use in purely unit testing.
You might think, "Well if it's even a little more efficient, why not use it all the time?" The answer is that you are writing tests, which implies you don't actually know what the code under test will do. One of your tests may change the common product, possibly even put it in an invalid state. Your other tests will start to fail, making it tricky to figure out the source of the problem. Almost invariably, you want to start out using SetUp, which will create a separate product for each one of your tests.
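The per-test alternative looks like this (a sketch of the same fixture using SetUp; the AutoFixture namespaces assume a recent version of the library):

```csharp
using AutoFixture;
using AutoFixture.AutoMoq;
using NUnit.Framework;

[TestFixture]
public class MyFixture
{
    IProduct _product;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test, so each test gets its own fresh Product;
        // one test mutating it cannot break the others.
        var fixture = new Fixture().Customize(new AutoMoqCustomization());
        _product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use _product here.
    }
}
```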
Yes, you use OneTimeSetUp (and OneTimeTearDown) to do all the setup that will be shared among tests in that fixture. It runs once per fixture, so however many tests the fixture contains, the code executes only once. Typically you initialize instances held in private fields for the other tests to use, so that you don't end up with lots of duplicated setup code.
We use C# and NUnit 3.6.1 in our project, and we are thinking of parallel test execution to reduce the duration. As far as I know, it is possible to parallelize the execution of TestFixtures with a Parallelizable attribute on the classes in an assembly. We want to go further and parallelize all tests inside a test class that are marked with Test or TestCase.
A typical test class looks like:
[TestFixture]
public class MyTestClass
{
private MySUT _sut;
private Mock<IDataBase> _mockedDB;
[SetUp]
public void SetUpSUT()
{
_mockedDB = new Mock<IDataBase>();
_sut = new MySUT(_mockedDB.Object);
}
[TearDown]
public void TearDownSUT()
{
_mockedDB = null;
_sut = null;
}
[Test]
public void MyFirstTC()
{
//Here do some testing stuff with _sut
}
[Test]
public void MySecondTC()
{
//Here do some testing stuff with _sut again
}
}
As you can see, we have some fields that we use in every test. To get better separation, we are considering executing every single test in its own process. I have no idea how to do that - or do we have to change something else to make parallel execution possible?
Thanks in advance!
Yes, you can run individual tests on separate threads (not processes), starting with NUnit 3.7, using the ParallelScope parameter of the Parallelizable attribute. From the NUnit documentation:
ParallelScope Enumeration
This is a [Flags] enumeration used to specify which tests may run in parallel. It applies to the test upon which it appears and any subordinate tests. It is defined as follows:
[Flags]
public enum ParallelScope
{
None, // the test may not be run in parallel with other tests
Self, // the test itself may be run in parallel with other tests
Children, // child tests may be run in parallel with one another
Fixtures // fixtures may be run in parallel with one another
}
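Applied to the fixture from the question, that might look like this (a sketch):

```csharp
using NUnit.Framework;

// Child tests of this fixture may run in parallel with one another.
[TestFixture]
[Parallelizable(ParallelScope.Children)]
public class MyTestClass
{
    [Test] public void MyFirstTC()  { /* ... */ }
    [Test] public void MySecondTC() { /* ... */ }
}
```

One caveat: NUnit creates a single instance of the fixture, so parallel child tests share instance fields such as a `_sut` assigned in [SetUp]; state that tests mutate may need to become local to each test.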
NUnit has no builtin way to run individual tests in separate processes. It allows you to run entire assemblies in separate processes or individual tests within an assembly on separate threads, as you have discovered already.
The question has come up in the past as a possible extension to NUnit, but has never been acted on up to now because there doesn't seem to be a very wide demand for it.
Not a real positive answer, I'm afraid. :-(
In VSTS, you can check a box to isolate tests. You can also run tests in parallel or in series. In Visual Studio itself, I don't think it's obvious/possible to do this. If it's possible in VSTS, there is probably a command line you can use to pass the appropriate switches.
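If you go the command-line route, a sketch using vstest.console (the switch names come from the vstest.console help; verify them against your version):

```shell
# Run the test assembly in an isolated process, with parallel execution:
vstest.console.exe MyTests.dll /InIsolation /Parallel
```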
I'm having a little trouble understanding how to approach the following class in order to unit test it.
The object under test consists of one public method, which accepts a list of objects of type A and returns an object B (a binary stream).
Because the resulting binary stream can get large, comparing it directly against an expected value makes for an unwieldy test.
The stream is built using several private instance helper methods.
class Foo
{
    private BinaryStream mBinaryStream;

    public Foo() { }

    public BinaryStream Bar(List<Object> objects)
    {
        // Perform magic to build and return the binary stream,
        // using several private instance helper methods.
        Magic(objects);
        MoreMagic(objects);
        return mBinaryStream;
    }

    private void Magic(List<Object> objects) { /* work on mBinaryStream */ }
    private void MoreMagic(List<Object> objects) { /* work on mBinaryStream */ }
}
Now I know that I need to test the behaviour of the class, thus the Bar method.
However, it is not feasible (in both space and time) to compare the output of the method with a predefined result.
The number of variations is just too large (and they are corner cases).
One option is to refactor these private helper methods into (a) separate class(es) that can be unit tested. The binary stream could then be chopped into smaller, better-testable chunks; but even then a lot of cases need to be handled, and comparing the binary result would defy the quick runtime of a unit test. It's an option I'd rather not go for.
Another option is to create an interface that defines all these private methods, in order to verify (using mocking) whether these methods were called or not. This means, however, that these methods must have public visibility, which is also not nice. And verifying method invocations might be just enough to test for.
Yet another option is to inherit from the class (making the privates protected) and try to test this way.
I have read most of the topics around such issue, but they seem to handle good testable results. This is different than from this challenge.
How would you unit test such class?
Your first option (extract out the functionality into separate classes) is really the "correct" choice from a SOLID perspective. One of the main points of unit testing things (and TDD by extension) is to promote the creation of small, single responsibility classes. So, that is my primary recommendation.
That said, since you're rather against that solution, if what you're wanting to do is verify that certain things are called, and that they are called in a certain order, then you can leverage Moq's functionality.
First, have BinaryStream be an injected dependency that can be mocked. Then set up the various calls that will be made against that mock, and finally call mockStream.VerifyAll() - this verifies that everything you set up on that mock was called.
Additionally, you can also setup a mock to do a callback. What you can do with this is setup an empty string collection in your test. Then, in the callback of the mock setup, add a string identifying the name of that function called to the collection. Then after the test is completed, compare that list to a pre-populated list containing the calls that you expect to have been made, in the correct order, and do an EqualTo Assert. Something like this:
[Test]
public void MyTest()
{
    var expectedList = new List<string> { "SomeFunction", "AnotherFunction", ... };
    var actualList = new List<string>();

    mockStream.Setup(x => x.SomeFunction()).Callback(() => actualList.Add("SomeFunction"));
    ...

    systemUnderTest.Bar(...);

    Assert.That(actualList, Is.EqualTo(expectedList));
    mockStream.VerifyAll();
}
Well, you are on top of how to deal with the private methods. As for testing the stream for the correct output: personally, I'd use a very limited set of input data and simply exercise the code in the unit test.
All the potential scenarios I'd treat as an integration test.
So have a file (say, XML) with inputs and expected outputs. Run through it: call the method with each input, compare the actual output with the expected output, and report differences. You could do this as part of check-in, or before deploying to UAT or some such.
Don't try to test private methods - they don't exist from consumer point of view. Consider them as named code regions which are there just to make your Bar method more readable. You always can refactor Bar method - extract other private methods, rename them, or even move back to Bar. That is implementation details, which do not affect class behavior. And class behavior is exactly what you should test.
So, what is behavior of your class? What are consumer expectations from your class? That is what you should define and write down in your tests (ideally just before you make them pass). Start from trivial situations. What if list of objects is empty? Define behavior, write test. What if list contains single object? If behavior of your class is very complex, then probably your class doing too many things. Try to simplify it and move some 'magic' to dependencies.
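For instance, a first trivial behavior test might look like this (assuming, hypothetically, that an empty input should yield an empty stream and that BinaryStream exposes a Length property):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class FooTests
{
    [Test]
    public void Bar_WithEmptyList_ReturnsEmptyStream()
    {
        var sut = new Foo();

        var result = sut.Bar(new List<object>());

        // The expectation here is illustrative - define what *your*
        // class should do for empty input, then assert exactly that.
        Assert.That(result.Length, Is.EqualTo(0));
    }
}
```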
In brief: What is the pattern called in the following code, and how should it be tested?
The purpose of the code is to encapsulates a number of actions on a zip file (written in C#, although the pattern is language independent):
public class ZipProcessor
{
public ZipProcessor(string zipFilePath) { ... }
public void Process()
{
this.ExtractZip();
this.StepOne();
this.StepTwo();
this.StepThree();
this.CompressZip();
}
private void ExtractZip() { ... }
private void CompressZip() { ... }
private void StepOne() { ... }
private void StepTwo() { ... }
private void StepThree() { ... }
}
The actual class has around 6 steps, and each step is a short method, 5-15 lines long. The order of the steps is not important, but Extract and Compress must always come first and last respectively. Also, StepTwo takes much longer to run than the rest of the steps.
The following are options I can think of for testing the class:
Only call the public Process method, and only check the result of one step in each test method (pro: clean, con: slow, because each test method calls StepTwo, which is slow, even though it doesn't care about the result of StepTwo)
Test the private steps directly using an accessor or wrapper (pro: simple, clear relation to what is run during a test and what is actually tested, con: still slow: extracts and compresses multiple times, hacky: need to use a private accessor or dynamic wrapper, or make the steps internal in order to access them)
Have only one test method that calls a bunch of smaller helper test methods (pro: fast, models the class more closely, con: violates "one assert per test method", would still need to run multiple times for different scenarios, ex. StepOne has different behavior based on input)
I'm a bit late to the discussion, but this is a Sergeant Method.
A quick Google search returns: "We call the bigger method here the 'sergeant' method, which basically calls other private methods, marshaling them. It may have bits and pieces of code here and there. Each of these private methods is about one particular thing. This promotes cohesion and makes the sergeant method read like comments".
As for how you can test it - your example presumably violates SRP because you have a zip compressor/decompressor (one thing) and then step1/2/3. You could extract the private method invocations into other classes and mock those for your test.
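A sketch of that refactoring (all names hypothetical): each step becomes a small injectable class and the processor just orchestrates them, so every step can be unit-tested, or mocked, in isolation:

```csharp
using System.Collections.Generic;

// Hypothetical seam: each processing step is its own small class.
public interface IZipStep
{
    void Execute(string workingDirectory);
}

public class ZipProcessor
{
    private readonly IEnumerable<IZipStep> _steps;

    public ZipProcessor(IEnumerable<IZipStep> steps) => _steps = steps;

    public void Process(string workingDirectory)
    {
        // Extraction/compression could bracket this loop; the point is that
        // the orchestration is trivial and each step is testable on its own.
        foreach (var step in _steps)
            step.Execute(workingDirectory);
    }
}

// Tiny step implementation used to demonstrate the wiring in a test.
public class RecordingStep : IZipStep
{
    private readonly string _name;
    private readonly List<string> _log;

    public RecordingStep(string name, List<string> log)
    {
        _name = name;
        _log = log;
    }

    public void Execute(string workingDirectory) => _log.Add(_name);
}
```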
I disagree that Chain-of-Responsibility makes much sense here - the compressor shouldn't need to know about the decompressor (unless they're the same class) or the reason why it's doing the decompression. The processing classes (Step1/2/3) shouldn't care that the data they're working with was compressed before, etc.
The strategy pattern doesn't really make sense either - just because you can swap the implementation of extractZip or compressZip doesn't mean you have a strategy pattern.
There's not really a pattern reflected here, but you could rewrite your code to use the Strategy or Chain of Responsibility patterns, as pointed out by Paul Michalik. As is, you basically just have a custom workflow defined for your application's needs. Using the Chain of Responsibility pattern, each step would be its own class which you could test independently. You may then want to write an integration test that ensures the whole process works end-to-end (a component- or acceptance-level test).
Consider splitting it into two or more classes - ExtractZip/Compress and the processing steps - or one ExtractZip/Compress class with a delegate in the middle.
It's a Strategy. Depending on your testing scenario, you can derive from ProcessorMock (or implement it, if it's an interface) and override the methods not relevant to the test with proper stubs. Usually, however, the more flexible pattern for such cases is Chain of Responsibility...