I have a requirement not to run a test case during certain times of the month, and I would like to ignore or skip the relevant test(s) when applicable. So far the only approach I have found that makes sense is Assert.Ignore(). Unfortunately, calling this method throws an IgnoreException, and I assumed that as a consequence the NUnit TearDown would not be triggered. This is not ideal, as I have code that I would like to execute after each test case irrespective of the test outcome. (I was incorrect in that assumption; please find comments below.)
In an ideal world I could just set the test outcome in the ignore/skip branch of my code, for example:
else
{
    // it's after the 25th of the month, so let's skip this test case
    TestContext.CurrentContext.Result.Outcome = ResultState.Ignored;
}
But I understand that TestContext.CurrentContext.Result.Outcome has only a getter, for good reason: making it settable would most likely open a can of worms, with people setting their tests as passed when they should not, etc.
[Test]
[TestCase(TestName = "Test A")]
[Description("Test A will be testing something.")]
[Category("Regression Test Pack")]
public void Test_Case_1()
{
    sessionVariables = new Session().SetupSession(uiTestCase: true);

    if (DateTime.Now.Day <= 25)
    {
        // let's test something!
    }
    else
    {
        // it's after the 25th of the month, so let's skip this test case
        Assert.Ignore();
    }
}
[TearDown]
public void Cleanup()
{
    sessionVariables.TeardownLogic(sessionVariables);
}
As you discovered, use of Assert.Ignore does not cause the teardown to be skipped. (If it did, that would be an NUnit bug, because teardown must always run if setup has been run.)
So, your choice of how to end the test depends on how you want the result to be shown...
Simply returning causes the test to pass, without any special message.
Assert.Pass does the same thing but allows you to include a message.
Assert.Ignore issues an "ignored" warning and allows you to specify a reason. The warning is propagated so that the result of the run as a whole is also "Warning".
Assert.Inconclusive gives an "inconclusive" result, which is intended to mean that the test could not be run due to some external factor. You may also specify a reason, and the overall run result is not affected.
As an alternative to Assert.Inconclusive, you could use Assume.That with an appropriate test. For example, the following code would trigger the "inconclusive" result after the 25th of the month:
Assume.That(DateTime.Now.Day <= 25);
For what it's worth, the "Inconclusive" result is intended for exactly this kind of situation. It means that the test could not be run for reasons beyond its control. "Ignored" results are intended to be treated as a problem to eventually be solved by the team, although many folks use them differently.
From the point of view of NUnit's design, Assume.That is the most natural way to produce the result. You would normally place it at the beginning of the test or the SetUp method, before any other code.
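For example, the guard could live in the SetUp method so that every test in the fixture gets the inconclusive result after the 25th. This is a minimal sketch (the constraint-style overload and message are optional):

```csharp
[SetUp]
public void Init()
{
    // If this assumption fails, NUnit reports the test as Inconclusive
    // rather than Failed; TearDown still runs if SetUp has completed.
    Assume.That(DateTime.Now.Day, Is.LessThanOrEqualTo(25),
        "Tests in this fixture do not run after the 25th of the month");

    sessionVariables = new Session().SetupSession(uiTestCase: true);
}
```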
You can just return instead of doing an assert.
public void Test_Case_1()
{
    sessionVariables = new Session().SetupSession(uiTestCase: true);

    if (DateTime.Now.Day <= 25)
    {
        // let's test something!
    }
    else
    {
        // it's after the 25th of the month, so let's skip this test case
        return;
    }
}
Related
Most of my test methods first try two or three trivial operations which should raise an exception, and then begin the real work. In Java, I would write it like this:
@Test
public void TestSomething() {
    try {
        testedClass.testedMethod(null);
        fail();
    } catch (IllegalArgumentException ex) {
        // OK
    }

    // and now let's get to the point ...
    // ...
}
I wanted to stick with this habit in C#, but it seems there is no way to force a test method to fail. I've been looking around for a while, but with no luck. Have I missed something?
PS: I know the correct way of testing these situations is this:
[TestMethod]
[ExpectedException(typeof(ArgumentNullException))]
public void TestSomethingWithNull() {
    testedClass.TestedMethod(null);
}
[TestMethod]
public void TestSomething() {
    // now the non-trivial stuff...
}
...but I don't like this. When I have, let's say, 6 test methods in my test class, and each of them should start by covering three trivial one-line situations which should raise an exception, this approach turns my 6 tests into 18. In a bigger application, this really pollutes my Test Explorer and makes the results more difficult to scan through.
And let's say I want to test a method whose responsibility is to validate each property of a given instance of some class, and raise a ValidationException if any value is incorrect. That could easily be handled by one TestValidation() test, but this approach turns it into:
TestValidationWithNull();
TestValidationWithInvalidProperty1();
TestValidationWithInvalidProperty2();
TestValidationWithNullProperty2();
Imagine you have 20 properties... :)
Of course, if this is the only way to do it, I'll bite.
You can use Assert.Fail(), or throw a NotImplementedException (if the method you are testing is not implemented yet).
But for testing whether code throws an exception, I suggest you use the ExpectedException attribute (if you are sticking with MSTest): the test will fail if the exception is not thrown.
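If you are not tied to the attribute, there are also assertion-style alternatives (NUnit's `Assert.Throws`, MSTest v2's `Assert.ThrowsException`) which check the exception inline and then let the test continue, which is what the questioner wanted. A sketch, assuming an NUnit fixture and the questioner's `testedClass`:

```csharp
[Test]
public void TestSomething()
{
    // The trivial guard-clause check, inline; the test fails here
    // if the expected exception is not thrown.
    Assert.Throws<ArgumentNullException>(() => testedClass.TestedMethod(null));

    // ... and now the real work of the test ...
}
```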
You need
Assert.Fail("Optional Message");
or you can just throw an exception from inside the method
You should also check out the TestCase attribute and TestCaseSource in NUnit. These might greatly simplify your code & testing when you want to pass different parameters to a test.
The reason it's advisable to have separate test methods for all the "trivial" stuff is that an Assert.Fail() early in the test method may hide later problems. As you say, though, lots of trivial test methods get unwieldy all too quickly.
Remember, though, that Assert.AreEqual (for NUnit at least) can compare arrays. So you can change your tests to something like the following and simultaneously test many aspects of the item under test, yet have visibility of all results:
public void TestSomething()
{
    var result1 = false;
    try
    {
        testedClass.testedMethod(null);
        result1 = true; // only reached if no exception was thrown
    }
    catch (ArgumentException ex) { }

    var result2 = SomeDetailedTest();

    var expected = new Object[] { false, 42 };
    var actual = new Object[] { result1, result2 };
    Assert.AreEqual(expected, actual);
}
I'm using TDD for the first time in my life today, with NUnit.
I have one method where I can insert multiple different inputs and check that the result is correct.
I read that multiple asserts in one test are not a problem, and I really don't want to write a new test for each input.
Example with multiple asserts:
[TestFixture]
public class TestClass
{
    public Program test;

    [SetUp]
    public void Init()
    {
        test = new Program();
    }

    [Test]
    public void Parse_SimpleValues_Calculated()
    {
        Assert.AreEqual(25, test.ParseCalculationString("5*5"));
        Assert.AreEqual(125, test.ParseCalculationString("5*5*5"));
        Assert.AreEqual(10, test.ParseCalculationString("5+5"));
        Assert.AreEqual(15, test.ParseCalculationString("5+5+5"));
        Assert.AreEqual(50, test.ParseCalculationString("5*5+5*5"));
        Assert.AreEqual(3, test.ParseCalculationString("5-1*2"));
        Assert.AreEqual(7, test.ParseCalculationString("7+1-1"));
    }
}
But when something fails, it is very hard to see which assert failed; if you have a lot of them, you have to go through them all to find the right one.
Is there any elegant way to show which input we used when an assert fails, instead of just the result and the expected result?
Thank you.
If you have a lot of them, you have to go through them all.
No you don't - you just look at the stack trace. If you're running the tests within an IDE, I find that's easily good enough to work out which line failed.
That said, there is a (significantly) better way - parameterized tests with TestCaseAttribute. So for example:
[Test]
[TestCase("5*5", 25)]
[TestCase("5*5*5", 125)]
[TestCase("5+5", 10)]
// etc
public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
{
    Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
}
Now your unit test runner will show you each test case separately, and you'll be able to see which one fails. Additionally, it will run all of the tests, even if an early one fails - so you don't end up fixing one only to find that the next one failed unexpectedly.
There's also TestCaseSourceAttribute for cases where you want to specify a collection of inputs separately - e.g. to use across multiple tests.
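A `TestCaseSource` version of the same test might look like this (a sketch; the source collection could also be shared across multiple tests):

```csharp
private static readonly object[] SimpleCases =
{
    new object[] { "5*5", 25 },
    new object[] { "5+5", 10 },
    new object[] { "5-1*2", 3 },
};

// Each entry in SimpleCases becomes a separately reported test case.
[TestCaseSource(nameof(SimpleCases))]
public void Parse_SimpleValues_Calculated(string input, int expectedOutput)
{
    Assert.AreEqual(expectedOutput, test.ParseCalculationString(input));
}
```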
The standard behavior of an assertion in a VS unit test project is just to tell me that a test has failed on a specific line.
Sometimes, however, it would be convenient if I could break when such an assertion fails, because just setting a breakpoint on that line also breaks for passing test cases. If I could break on the assertion failing, I could see immediately which test case is failing.
How can I temporarily tell VS to break when an assertion fails?
E.g. I have many loops like the following in my unit tests and want to know which iteration is failing:
foreach (var testcase in testcases)
{
    Assert.AreEqual(testcase.ExpectedOutputData, FunctionUnderTest(testcase.InputData));
}
Just use a conditional breakpoint, and give it the logical negative of the assertion's check.
Personally, I think breakpoints are seriously underused as nothing more than a 'break at this point' tool. They're very versatile, and can easily be used instead of Console.WriteLine for showing debug output without actually modifying the code, as well as for breaking only when an assertion would fail.
They do tend to cause a bit of a performance hit when overused this way, but that's rarely a problem when running unit tests.
Without having too much knowledge of how you're constructing your unit tests, it seems to me that if you are struggling to see which assertion is causing your unit test to fail, then perhaps you are testing too much in your individual tests? Unit tests should be as atomic as possible to guard against precisely this issue. If you can break your tests up, even down to having a single assertion within a single test, it will be much easier to work out which test is failing. I would certainly advocate that over writing breakpoint-specific code into your unit tests.
If you use NUnit, you can use test cases, which let you debug each case separately:
[TestCase(0)]
[TestCase(1)]
public void NunitTestCases(int expected)
{
    Assert.AreEqual(expected, 0);
}
I guess you can always do this, though:
[Test]
public void BreakTest()
{
    for (int i = 0; i < 2; i++)
    {
        bool condition = i == 0;
        if (!condition)
            Debugger.Break();
        Assert.IsTrue(condition);
    }
}
So, I'm new to unit testing, and even more so to test-first development. Is it valid for me to have just a single Assert.IsTrue statement in my unit test, where I call my method with a valid parameter and compare the result to the known good answer?
Method
public static string RemoveDash(string myNumber)
{
    string cleanNumber = myNumber.Replace("-", "");
    return cleanNumber;
}
Test
[TestMethod()]
public void TestRemoveDash()
{
    Assert.IsTrue(RemoveDash("50-00-0") == "50000");
}
That's pretty valid if it tests the functionality of your method, which it very much seems to be doing.
You might consider using Assert.AreEqual here instead, but it doesn't really matter. Also, I know this is a toy example, but always make sure to test cases where the input is not what is expected, as well as whatever other valid forms it can come in (this can be in the same test method or a different one, depending on your preference).
Testers sometimes read our tests, so I try to make them as readable as possible. I would prefer the following over the single Assert:
[TestMethod()]
public void TestRemoveDash()
{
    string expected = "50000";
    string actual = RemoveDash("50-00-0");
    Assert.AreEqual(expected, actual);
}
My only comment is to use Assert.AreEqual instead of Assert.IsTrue:
Assert.AreEqual("50000", RemoveDash("50-00-0"));
The reason is that if the test fails, the error message is more descriptive of what was meant to happen and what actually did happen. A message that says "Expected value <50000> but was actually <50-00-0>" is a lot better than "Expected value to be true, but was false."
As a rule of thumb, whenever you find yourself wanting to use Assert.IsTrue, go through Assert methods and see if there is a better method to test your expectation (e.g. Assert.IsInstanceOfType, Assert.IsNotNull, etc).
This seems perfectly valid - however, why not include a few other tests in that method, along the same lines but testing that e.g. RemoveDash("-") == "" and RemoveDash("-5") == "5" etc?
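Those extra cases can share one method in MSTest v2 via data rows, so each input still shows up as its own result. A sketch, assuming the `RemoveDash` method from the question:

```csharp
[DataTestMethod]
[DataRow("50-00-0", "50000")]
[DataRow("-", "")]
[DataRow("-5", "5")]
[DataRow("50000", "50000")] // no dashes: input passes through unchanged
public void TestRemoveDash(string input, string expected)
{
    Assert.AreEqual(expected, RemoveDash(input));
}
```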
I hope this doesn't come across as a stupid question, but it's something I have been wondering about. I wish to unit test a method which contains some logic to check that certain values are not null.
public void MyMethod(string value1, string value2)
{
    if (value1 == null)
    {
        // do something (throw exception)
    }
    if (value2 == null)
    {
        // do something (throw exception)
    }

    // rest of method
}
I want to test this by passing null values into the method. My question is: should I create a unit test for each argument, or can I create one unit test which checks what happens if I set value1 to null and then checks what happens if I set value2 to null?
i.e.
[TestMethod]
public void TestMyMethodShouldThrowExceptionIfValue1IsNull()
{
    //test
}

[TestMethod]
public void TestMyMethodShouldThrowExceptionIfValue2IsNull()
{
    //test
}
or
[TestMethod]
public void TestMyMethodWithNullValues()
{
    //pass null for value1
    //check

    //pass null for value2
    //check
}
Or does it make any difference? I think I read somewhere that you should limit yourself to one assert per unit test. Is this correct?
Thanks in advance
Zaps
You should write a unit test for each test case (assertion) to avoid Assertion Roulette.
The "ideal" unit test test one thing only in order to pinpoint errors exactly.
In practice, this is not nearly as important as most TDD proponents state, because tests don't fail frequently, and finding out which assert failed takes almost no time compared with the rest of the work involved in investigating and fixing the problem.
Doing extra work when writing the tests to save yourself work when it fails (which may never happen) is a form of YAGNI.
If having multiple methods is no extra work beyond typing more method declarations, you should do it, but if it leads to duplicated setup code, I see absolutely nothing wrong with testing several conditions in one test method.
If you are doing two tests in the same test method, your tests are not true unit tests.
For instance, what if the test for the first null value fails?
If both tests are in the same test method, the second check will probably not be executed, which means the test on the second null value depends on the test on the first null value.
On the other hand, if you have two separate test methods, you can test each case in perfect isolation.
Judging from the code of your MyMethod method, there is no link between the two conditions, which means there should probably not be any dependency between the tests for those two conditions.
So: you should use two distinct tests.
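Concretely, the two distinct tests might look like this (a sketch, assuming MSTest v2's `Assert.ThrowsException` and that `MyMethod` throws `ArgumentNullException` for a null argument; the "valid" strings are placeholders):

```csharp
[TestMethod]
public void MyMethod_NullValue1_ThrowsArgumentNullException()
{
    // Only value1 is null, so a failure here points at the first guard clause.
    Assert.ThrowsException<ArgumentNullException>(
        () => MyMethod(null, "valid"));
}

[TestMethod]
public void MyMethod_NullValue2_ThrowsArgumentNullException()
{
    // Only value2 is null, so a failure here points at the second guard clause.
    Assert.ThrowsException<ArgumentNullException>(
        () => MyMethod("valid", null));
}
```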
To extrapolate with a non-technical analogy:
Say you have a car, and you want to test the color.
Tests might be:
Car is red.
Car is blue.
Car is painted.
Now it might make sense to have both "painted" and "blue", but they really are different things. And if you test for both red and blue, one will always fail, and the failure would not make sense from an isolation standpoint.
Always test ONE thing at a time, as you suggest; many such tests together comprise the test suite.