I'm using TDD for the first time today, with NUnit.
I have one method into which I can feed multiple different inputs and check whether the result is correct.
I read that multiple asserts in one test are not a problem, and I really don't want to write a new test for each input.
Example with multiple asserts:
[TestFixture]
public class TestClass
{
    public Program test;

    [SetUp]
    public void Init()
    {
        test = new Program();
    }

    [Test]
    public void Parse_SimpleValues_Calculated()
    {
        Assert.AreEqual(25, test.ParseCalculationString("5*5"));
        Assert.AreEqual(125, test.ParseCalculationString("5*5*5"));
        Assert.AreEqual(10, test.ParseCalculationString("5+5"));
        Assert.AreEqual(15, test.ParseCalculationString("5+5+5"));
        Assert.AreEqual(50, test.ParseCalculationString("5*5+5*5"));
        Assert.AreEqual(3, test.ParseCalculationString("5-1*2"));
        Assert.AreEqual(7, test.ParseCalculationString("7+1-1"));
    }
}
But when something fails, it is hard to tell which assert failed; if you have a lot of them, you have to go through them all to find the right one.
Is there an elegant way to show which input was used when an assert fails, rather than just the actual and expected results?
Thank you.
"If you have a lot of them, you have to go through them all to find the right one."
No, you don't - you just look at the stack trace. If you're running the tests within an IDE, I find that's easily good enough to work out which line failed.
That said, there is a (significantly) better way - parameterized tests with TestCaseAttribute. So for example:
[Test]
[TestCase("5*5", 25)]
[TestCase("5*5*5", 125)]
[TestCase("5+5", 10)]
// etc
public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
{
    Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
}
Now your unit test runner will show you each test case separately, and you'll be able to see which one fails. Additionally, it will run all of the tests, even if an early one fails - so you don't end up fixing one only to find that the next one failed unexpectedly.
There's also TestCaseSourceAttribute for cases where you want to specify a collection of inputs separately - e.g. to use across multiple tests.
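As a minimal sketch (reusing the same test fixture and ParseCalculationString method as above), the same test rewritten to pull its cases from a shared source might look like this:

private static readonly object[] SimpleCases =
{
    new object[] { "5*5", 25 },
    new object[] { "5+5", 10 },
    new object[] { "5-1*2", 3 }
};

[Test]
[TestCaseSource(nameof(SimpleCases))]
public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
{
    // Each entry in SimpleCases shows up as its own test case in the runner.
    Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
}

The source member can also live in a separate class and be referenced with [TestCaseSource(typeof(SomeDataClass), nameof(SomeDataClass.Cases))], which is what makes it easy to share across multiple tests.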
I am trying to execute some code before all tests - more specifically, even before any test case sources are evaluated.
The solution that is supposed to work does not execute the code early enough (the breakpoint is hit only after the test case sources have been evaluated):
[SetUpFixture]
public class GlobalSetup
{
    [OneTimeSetUp]
    public void Setup()
    {
        // Unfortunately does stuff after TestCaseSources
    }
}
I managed to achieve this by creating a fake test fixture with a static constructor, like this:
[TestFixture]
public class GlobalSetup
{
    // Step 1
    static GlobalSetup()
    {
        // Does stuff before every other TestCaseSource
    }

    // Step 3
    [Test, TestCaseSource(nameof(GlobalSetup.TestInitializerTestCaseData))]
    public void Setup() { }

    // Step 2
    public static IEnumerable TestInitializerTestCaseData => new[] { new TestCaseData() };
}
What is the correct way to do this, without resorting to this workaround?
As the comments in the issue Pablo notPicasso cited indicate, you are trying to use NUnit in a way it isn't designed to be used. A global setup is possible, of course, but it runs before all other test execution. Test case sources are evaluated as a part of test discovery which has to be complete before any test execution can begin.
It's usually possible to use a global setup by reducing the amount of work done in your TestCaseSource. This is desirable anyway - TestCaseSource should always be as lightweight as possible.
Your example doesn't indicate exactly what you are trying to do before the tests are loaded. If you add more info, I can refine this answer. Meanwhile, here are a couple of examples...
If your TestCaseSource is creating instances of the objects you are testing, don't do that. Instead, have the source supply simple parameters like strings and ints, and create the objects in the SetUp or OneTimeSetUp for your tests.
If your TestCaseSource is initializing a database, don't do that. Instead, save the parameters needed to initialize the database and do the initialization in a OneTimeSetUp at some level of the test hierarchy.
Hopefully, you get the idea... or provide some further info on what you are doing.
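As a rough sketch of the first suggestion - all the names here (Database, ConnectionNames, and so on) are made up for illustration - the source yields only lightweight parameters, and the expensive work happens at execution time:

public class DatabaseTests
{
    // Hypothetical system under test; created once at execution time, not at discovery time.
    private Database _db;

    // Lightweight source: just strings, evaluated quickly during test discovery.
    public static IEnumerable<string> ConnectionNames => new[] { "Primary", "Replica" };

    [OneTimeSetUp]
    public void CreateDatabase()
    {
        _db = Database.CreateForTesting();   // expensive work deferred to execution
    }

    [Test, TestCaseSource(nameof(ConnectionNames))]
    public void CanConnect(string connectionName)
    {
        Assert.That(_db.Connect(connectionName), Is.True);
    }
}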
The workaround you have selected looks like it would work in the current implementation. However, it depends on the exact, unspecified order in which NUnit does things internally. It could easily stop working in a future release.
NUnit has a OneTimeSetUp attribute. I am trying to think of some scenarios where I could use it. However, I cannot find any GitHub examples online (even my link does not have an example).
Say I had some code like this:
[TestFixture]
public class MyFixture
{
    IProduct Product;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        //Use Product here
    }

    [Test]
    public void Test2()
    {
        //Use Product here
    }

    [Test]
    public void Test3()
    {
        //Use Product here
    }
}
Is this a valid scenario for initialising a variable inside OneTimeSetUp, i.e. because in this case Product is used by all of the test methods?
The fixture setup for Moq seems reasonable to me.
However, reusing the same Product is suspect, as it can be modified in one test and accidentally carried over into another, which could lead to unstable tests. I would create a new Product for each test run, at least if it's the system under test.
Yes. No. :-)
Yes... [OneTimeSetUp] works as you expect. It executes once, before all your tests and all your tests will use the same product.
No... You should probably not use it this way, assuming that the initialization of Product is not extremely costly - it likely is not since it's based on a mock.
OneTimeSetUp is best used only for extremely costly steps like creating a database. In fact, it tends to have very little use in purely unit testing.
You might think, "Well if it's even a little more efficient, why not use it all the time?" The answer is that you are writing tests, which implies you don't actually know what the code under test will do. One of your tests may change the common product, possibly even put it in an invalid state. Your other tests will start to fail, making it tricky to figure out the source of the problem. Almost invariably, you want to start out using SetUp, which will create a separate product for each one of your tests.
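As a sketch of that advice, here is the question's fixture rearranged to use a per-test SetUp with the same AutoFixture/AutoMoq initialization:

[TestFixture]
public class MyFixture
{
    IProduct Product;

    [SetUp]
    public void SetUp()
    {
        // Runs before every test, so each test gets a fresh Product
        // and cannot be affected by changes made in other tests.
        IFixture fixture = new Fixture().Customize(new AutoMoqCustomization());
        Product = fixture.Create<Product>();
    }

    [Test]
    public void Test1()
    {
        // Use Product here
    }
}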
Yes, you use OneTimeSetUp (and OneTimeTearDown) to do all the setup that will be shared among the tests in that fixture. It runs once for the fixture, so no matter how many tests the fixture contains, the code executes only once. Typically you will initialize instances held in private fields, etc., for the other tests to use, so that you don't end up with lots of duplicated setup code.
With TestNG we have the "dependsOnMethods" feature, which checks whether another test has passed before executing the current one; if that test failed, the current test is not executed unless you add the alwaysRun attribute, as shown below:
@Test(dependsOnMethods = { "testMethod2" }, alwaysRun = true)
public void testMethod1() {
    System.out.println("testMethod1");
}

@Test
public void testMethod2() {
    System.out.println("testMethod2");
    int result = 3;
    Assert.assertEquals(result, 2);
}
Is there any way to have the same behaviour using NUnit?
Using NUnit's own facilities, there is no way to do this. There has been a lot of discussion around adding this sort of dependency, but it does not yet exist. Maybe TestNG is a good model for a future Attribute.
Currently, all you can do is order tests in NUnit. So, if you gave testMethod2 the attribute [Order(1)] it would run before any other tests in the fixture. This has some limitations:
Ordering has to do with starting tests, not waiting for them to finish. In a parallel environment, both tests could still run together. So, to use this workaround, you should not run the tests in the fixture in parallel. Fixtures can still run in parallel against one another, of course.
There is no provision that testMethod2 must pass in order for testMethod1 to be run. You could handle this yourself by setting an instance field in testMethod2 and testing it in testMethod1. I would probably test it using Assume.That, so that testMethod1 is not displayed as a warning or error in the case where testMethod2 failed.
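A rough sketch of that workaround (the fixture name and the flag field are illustrative; [NonParallelizable] addresses the parallelism caveat above):

[TestFixture]
[NonParallelizable]   // ordering only controls start order, so keep these tests sequential
public class DependentTests
{
    private bool _method2Passed;

    [Test, Order(1)]
    public void TestMethod2()
    {
        Assert.AreEqual(2, 1 + 1);
        _method2Passed = true;   // only reached if the assert above passes
    }

    [Test, Order(2)]
    public void TestMethod1()
    {
        // Skip (rather than fail) this test if the one it depends on did not pass.
        Assume.That(_method2Passed, "TestMethod2 did not pass");
    }
}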
I have a FileExtractor class with a Start method that performs several steps.
I've created a test class called "WhenExtractingInvalidFile.cs" within a folder called "FileExtractorTests" and added some test methods to it, as below, each of which should verify one step of the Start() method:
[TestMethod]
public void Should_remove_original_file()
{
}
[TestMethod]
public void Should_add_original_file_to_errorStorage()
{
}
[TestMethod]
public void Should_log_error_locally()
{
}
This way, it'd nicely organize the behaviors and the expectations that should be met.
The problem is that most of the logic of these test methods is the same, so should I create one test method that verifies all the steps, or keep them separate as above?
[TestMethod]
public void Should_remove_original_file_then_add_original_file_to_errorStorage_then_log_error_locally()
{
}
What's the best practice?
While it's commonly accepted that the Act section of tests should only contain one call, there's still a lot of debate over the "One Assert per Test" practice.
I tend to adhere to it because:
when a test fails, I immediately know (from the test name) which of the multiple things we want to verify on the method under test went wrong.
tests that involve mocking are already harder to read than regular tests; they can easily become arcane when you're asserting against multiple mocks in the same test.
If you don't follow it, I'd at least recommend that you include meaningful Assert messages in order to minimize head scratching when a test fails.
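For instance (the names here are purely illustrative), a message that states the expectation being checked:

// Hypothetical assertion: the message tells you which expectation failed and why.
Assert.IsTrue(errorStorage.Contains(originalFileName),
    "The original file should have been added to the error storage after extraction failed.");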
I would do the same thing you did, but with an _on_start suffix, like this: Should_remove_original_file_on_start. The single combined method will only ever report at most one assert failure, even though all aspects of Start could be broken.
DRY - don't repeat yourself.
Think about the scenario where your test fails. What would be most useful from a test organization point of view?
The second dimension to look at is maintenance. Do you want to run and maintain one test or n tests? Don't overburden development by writing many tests that have little value (I think TDD is a bit overrated for this reason). A test is more valuable if it exercises a larger path through the code rather than a short one.
In this particular scenario I would create a single test. If this test fails often and you are not reaching the root cause of the problem fast enough, refactor it into multiple tests.
OK so I have the following classes in C#:
class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass("Hello World");
        myClass.WriteToConsole();
    }
}

class MyClass
{
    private string MyProperty { get; set; }

    public MyClass(string textToEncapsulate)
    {
        MyProperty = textToEncapsulate;
    }

    public void WriteToConsole()
    {
        Console.WriteLine(MyProperty);
    }
}
Three questions:
What is unit testing?
Would unit testing be beneficial in the above example?
How would I go about 'Unit Testing' the above example?
Thanks
1. What is unit testing?
Manual testing is time consuming. It can be hard to run the exact same set of tests each time by hand to make sure that all parts of your code are functioning as expected. When testing a complete product by hand, it's also really hard to test all code paths.
How would you test the reaction of your code when a database is unavailable? Or when some erroneous data is stored? That would take quite some time to get right.
Unit Testing means that we start testing the smallest possible parts of our code. And to make sure we can do this easily we automate the process. This means that we write test code that tests our production code.
For example:
int a = 3;
int b = 5;
Calculator c = new Calculator();
int sum = c.Sum(a, b);
Assert.AreEqual(8, sum);
This test ensures that the Sum function on your Calculator class is working correctly.
Now, let's say that you want to optimize the inner workings of your Calculator class. You start changing and optimizing code. After each change you run your unit tests, and when they all succeed you know you haven't broken anything.
Let's say that in production a user submits a bug report for your Calculator. Your first step will be to write a unit test that reproduces this bug. Once the new test fails (because the bug is still there!), you fix the bug; the unit test succeeds, and you can be certain that this bug will never come back.
This safety harness is one of the biggest benefits of unit tests.
2. Would unit testing be beneficial in the above example?
3. How would I go about 'Unit Testing' the above example?
Unit Testing is a good practice. It helps you prove that your code is working. In your example however, it would be hard to test the code.
Output to the console is not something that can be easily tested. If, however, you abstract away the Console.WriteLine call, your code becomes much more testable.
Writing Unit Tests is actually quite simple. The problem is writing code that can actually be tested.
A more testable version of your code would be:
class Program
{
    static void Main(string[] args)
    {
        MyClass myClass = new MyClass(new ConsoleOutputService(), "Hello World");
        myClass.WriteToConsole();
    }
}

public interface IOutputService
{
    void WriteLine(string MyProperty);
}

public class ConsoleOutputService : IOutputService
{
    public void WriteLine(string MyProperty)
    {
        Console.WriteLine(MyProperty);
    }
}

class MyClass
{
    private IOutputService _outputService;
    private string MyProperty { get; set; }

    public MyClass(IOutputService outputService, string textToEncapsulate)
    {
        _outputService = outputService;
        MyProperty = textToEncapsulate;
    }

    public void WriteToConsole()
    {
        _outputService.WriteLine(MyProperty);
    }
}
You have replaced your direct dependency on the Console with an interface. When unit testing this code, you could supply a fake for your IOutputService and check the outcome.
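For illustration only - the fake class and test name below are made up - a hand-rolled fake can simply record what was written so the test can assert against it:

// Hypothetical fake that records everything written through IOutputService.
public class FakeOutputService : IOutputService
{
    public List<string> Lines { get; } = new List<string>();

    public void WriteLine(string MyProperty)
    {
        Lines.Add(MyProperty);
    }
}

[Test]
public void WriteToConsole_SendsTextToOutputService()
{
    var fake = new FakeOutputService();
    var myClass = new MyClass(fake, "Hello World");

    myClass.WriteToConsole();

    Assert.AreEqual(1, fake.Lines.Count);
    Assert.AreEqual("Hello World", fake.Lines[0]);
}

A mocking library such as Moq would work just as well; the point is that the test never touches the real console.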
A really good book is xUnit Test Patterns. It shows the common pitfalls in writing unit tests and patterns to avoid/fix them.
I also wrote a blog post about testable code a couple of months ago. It's somewhat more advanced, but maybe you can get something out of it. If you have any questions, feel free to ask.
The class you proposed is not very testable, since it writes directly to the console. It would be different if you changed the method to WriteToConsole(TextWriter output). In that case you can mock the TextWriter and assert that the class writes exactly what you expect.
The idea is that if you write testable code, you write better code, because designing for testability makes your code more reusable. Even if in your case writing a unit test seems a little silly, having a test proving that this simple behaviour works makes you safer against further modifications, done by you or by others, that could change the expectations you have defined as a side effect.
Please note that I proposed using a simple TextWriter as an additional parameter to make your class testable: in my opinion you should make the smallest effort needed to make your class testable, and since TextWriter is easy to mock, you get the benefit of testing without rewriting your entire code, which is usually a good thing.
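As a minimal sketch of that suggestion (assuming WriteToConsole is changed to accept a TextWriter as described, and using an NUnit-style test; the test name is illustrative), a StringWriter is enough - no mocking library required:

[Test]
public void WriteToConsole_WritesEncapsulatedText()
{
    // StringWriter is a TextWriter that captures output in memory.
    var writer = new StringWriter();
    var myClass = new MyClass("Hello World");

    myClass.WriteToConsole(writer);   // assumes the signature change suggested above

    Assert.AreEqual("Hello World" + Environment.NewLine, writer.ToString());
}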