How to ignore a test result in NUnit? [duplicate] - c#

We've got some integration tests in our solution. To run them, simulation software must be installed on the developer PC, but it isn't installed on every machine. If the simulation software is missing, these tests should be skipped; otherwise they fail with a NullReferenceException.
I'm now looking for a way to do a "conditional ignore" for tests/test fixtures.
Something like
if(simulationFilesExist)
do testfixture
else
skip testfixture
NUnit gives some useful things like Ignore and Explicit, but that's not quite what I need.

Use some code in your test or fixture set up method that detects if the simulation software is installed or not and calls Assert.Ignore() if it isn't.
[SetUp]
public void TestSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting.");
    }
}
or
[TestFixtureSetUp]
public void FixtureSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting fixture.");
    }
}
In NUnit 3.0 and higher you have to use the OneTimeSetUp attribute instead of TestFixtureSetUp.
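For example, the NUnit 3 version of the fixture-level check could look like this (a minimal sketch, reusing the TestHelper from above; the fixture name is made up for illustration):
[TestFixture]
public class SimulationTests
{
    [OneTimeSetUp]
    public void FixtureSetUp()
    {
        // Skips every test in this fixture when the simulation software is missing.
        if (!TestHelper.SimulationFilesExist())
        {
            Assert.Ignore("Simulation files are not installed. Omitting fixture.");
        }
    }
}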

NUnit also gives you the option to supply a Category attribute.
Depending on how you are launching your tests, it may be appropriate to flag all the tests that require the simulator with a known category (e.g., [Category("RequiresSimulationSoftware")]). Then from the NUnit GUI you can choose to exclude certain categories. You can do the same thing from the NUnit command line runner (specify /exclude:RequiresSimulationSoftware if applicable).
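For instance (a minimal sketch; the fixture name is invented for illustration):
[TestFixture]
[Category("RequiresSimulationSoftware")]
public class SimulatorIntegrationTests
{
    [Test]
    public void RunsAgainstSimulator()
    {
        // Only meaningful when the simulation software is installed.
    }
}
With the NUnit 3 console runner, the equivalent exclusion is expressed with the test selection language, e.g. --where "cat != RequiresSimulationSoftware".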

I didn't want to duplicate the Assert.Ignore condition in every test case, so I ended up using a custom attribute class derived from NUnitAttribute:
using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class SimulatorOnlyAttribute : NUnitAttribute, IApplyToTest
{
    public void ApplyToTest(Test test)
    {
        if (test.RunState == RunState.NotRunnable)
        {
            return;
        }

        if (!Helper.RunsOnSimulator)
        {
            test.RunState = RunState.Ignored;
            test.Properties.Set(PropertyNames.SkipReason, "This test should run only on simulator");
        }
    }
}
So now I can just mark required test cases with the new attribute:
[SimulatorOnly]
public void Test()
For reference, you could investigate the source code of the IgnoreAttribute.
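If you also want to skip whole fixtures (as in the original question), the same IApplyToTest approach appears to work when the attribute is allowed on classes as well. A possible variation (an untested sketch, reusing the hypothetical Helper class from the answer):
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class SimulatorOnlyAttribute : NUnitAttribute, IApplyToTest
{
    public void ApplyToTest(Test test)
    {
        // 'test' is the whole fixture when the attribute sits on a class,
        // or the individual test when it sits on a method.
        if (test.RunState == RunState.NotRunnable)
            return;

        if (!Helper.RunsOnSimulator)
        {
            test.RunState = RunState.Ignored;
            test.Properties.Set(PropertyNames.SkipReason, "This test should run only on simulator");
        }
    }
}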

Use:
[SetUp]
public void TestSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting.");
    }
}
You could also put this kind of condition in a TestFixtureSetUp/OneTimeSetUp method, but if the fixture contains parameterized tests, ignoring the fixture there can send the run into an infinite loop and hang your tests. So it is better to put the if condition in a SetUp method.
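A minimal sketch of that setup with parameterized tests (TestHelper is the hypothetical helper from the answers above; the fixture and test names are invented):
[TestFixture]
public class SimulationCalculationTests
{
    [SetUp]
    public void TestSetUp()
    {
        // Runs before every test case, including each [TestCase] variation,
        // so each one is ignored individually instead of hanging the fixture.
        if (!TestHelper.SimulationFilesExist())
        {
            Assert.Ignore("Simulation files are not installed. Omitting.");
        }
    }

    [TestCase(1)]
    [TestCase(2)]
    public void SimulatedCalculation_ReturnsResult(int input)
    {
        // Test body that needs the simulation software.
    }
}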

There are a lot of ways to alter the result status of a test. Here are a few, along with ways to read the various statuses back:
TestExecutionContext.CurrentContext.CurrentTest.MakeInvalid("I want this test to be SKIPPED");

ResultState resultStateObject = new ResultState(TestStatus.Skipped);
TestExecutionContext.CurrentContext.CurrentResult.SetResult(resultStateObject, "this test is being skipped derp derp");

TestExecutionContext.CurrentContext.CurrentTest.RunState = RunState.Ignored;

Logger.log("After doing things");

resultstate = TestExecutionContext.CurrentContext.CurrentResult.ResultState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State: " + resultstate);

resultstatestatus = TestExecutionContext.CurrentContext.CurrentResult.ResultState.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State Status: " + resultstatestatus);

runstate = TestExecutionContext.CurrentContext.CurrentTest.RunState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Run State: " + runstate); // test="#runstate = 'Skipped' or #runstate = 'Ignored' or #runstate='Inconclusive'

status = TestContext.CurrentContext.Result.Outcome.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result Status: " + status);

message = TestExecutionContext.CurrentContext.CurrentResult.Message.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Message: " + message);

Related

Methods of the class implementing ITestListener are never called. NUnit

I have an implementation of the listener interface and I want to save a screenshot from the web driver, as well as the XML of the test itself.
using System.IO;
using System.Linq;
using NUnit.Framework.Interfaces;
using OpenQA.Selenium;

public class TestListener : ITestListener
{
    public void TestFinished(ITestResult result)
    {
        try
        {
            string fileName;
            if (!Directory.Exists("Logs")) Directory.CreateDirectory("Logs");
            if (Directory.GetFiles(@"Logs\").Length == 0)
            {
                fileName = @"Logs\Log_0.xml";
            }
            else
            {
                fileName = @"Logs\Log_" + System.Convert.ToString(
                    Directory.GetFiles(@"Logs\").Select(item => System.Convert.ToInt32(item.Split('_')[1]))
                        .OrderByDescending(item => item).First() + 1) + ".xml";
            }
            using (StreamWriter fstream = new StreamWriter(fileName))
            {
                fstream.Write(result.ToXml(true).OuterXml);
            }
        }
        catch {}

        if (result.ResultState.Status == TestStatus.Failed)
        {
            try
            {
                Screenshot screenshot = ((ITakesScreenshot)Driver.DriverInstance.GetInstance()).GetScreenshot();
                if (!Directory.Exists("Screenshots")) Directory.CreateDirectory("Screenshots");
                if (Directory.GetFiles(@"Screenshots\").Length == 0)
                    screenshot.SaveAsFile(@"Screenshots\Screen_0.jpg");
                else
                {
                    string fileName = @"Screenshots\Screen_" + System.Convert.ToString(
                        Directory.GetFiles(@"Screenshots\").Select(item => System.Convert.ToInt32(item.Split('_')[1]))
                            .OrderByDescending(item => item).First() + 1) + ".jpg";
                    screenshot.SaveAsFile(fileName);
                }
            }
            catch {}
        }
    }
    ....
}
and I run an illustrative test to show that it works:
[Test]
public void test()
{
    Assert.IsTrue(false);
}
but no event is ever pushed to the listener.
NUnit v3.12.0
As you discovered, the NUnit Framework has an interface ITestListener, which it uses to communicate events about tests. However, it isn't something you can create in your tests because you have no way to cause NUnit to call your methods with that interface.
You have probably confused this interface with ITestEventListener, which is an interface provided by the NUnit engine. The engine, of course, is a different piece of software: it's what runs tests written against the NUnit framework. The engine allows several types of extensions to be created, and one of those is, in fact, a test event listener extension, implementing ITestEventListener.
However, you can't implement an extension by including its code in your tests. (See note) Extensions are implemented separately and must be installed into the engine you are using... i.e. the engine that came with your test runner.
I can't write a complete how-to here for implementing engine extensions. I'll refer you to the docs for that...
https://docs.nunit.org/articles/nunit-engine/extensions/creating-extensions/Event-Listeners.html
If you look at the index on that page, you'll also see there is info about both framework and engine extensibility. Once you have absorbed it, I suggest asking another question about any problems you run into.
Note: I wish it were possible to just include the class in your tests and have it added as a sort of ad hoc extension! However, there are architectural reasons why that's next to impossible... for one thing, your tests may be targeting a completely different runtime from the engine itself!
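For orientation, an engine event-listener extension is roughly shaped like this (a hedged sketch based on the docs linked above; it lives in its own assembly, references the nunit.engine.api package, and must be installed into the engine rather than compiled into your test project):
using NUnit.Engine;
using NUnit.Engine.Extensibility;

[Extension]
public class ConsoleEventListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // 'report' is an XML fragment describing the event
        // (start-run, test-case finished, test-suite finished, etc.).
        System.Console.WriteLine(report);
    }
}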

Why is my code coverage zero percent even though all unit tests passed?

I have created a project with just one method and written a unit test for it; the test passes locally. But I'm not sure why, after running the SonarCloud scanner, it shows zero percent coverage.
This is the test class
public class DataStructureTest
{
    private readonly DataStructure ds;

    public DataStructureTest()
    {
        ds = new DataStructure();
    }

    [Theory, MemberData(nameof(LongestString_Return_Longest_String_ShouldPass_Data))]
    public void LongestString_Return_Longest_String_ShouldPass(string input, string expect)
    {
        // Act
        var actual = ds.LongestString(input);

        // Assert
        Assert.Equal(expect, actual);
    }

    public static TheoryData<string, string> LongestString_Return_Longest_String_ShouldPass_Data()
    {
        return new TheoryData<string, string>
        {
            { "Hello John", "Hello" },
            { "Hi John and Mandy", "Mandy" }
        };
    }
}
You have to be careful about what these tools mean when they use a particular term. For example, SonarQube has the following article: https://community.sonarsource.com/t/sonarqube-and-code-coverage/4725
The FAQ has this as its first question:
Q: After migrating from 5.6 to 6.7 my coverage shows 0%, why is that ?
R: Since SonarQube 6.2 and the implementation of the MMF-345 565, if no coverage information is found the coverage is then set to zero by default.
I think your case may come under this: the unit tests pass, but no coverage report is generated and imported, so the coverage defaults to zero.

How to structure my unit test?

I am new to unit testing and wondering how to start. The application I am currently working on does not have any unit tests. It is a WinForms application, and I am only interested in testing the data layer of this application.
Here is an example.
public interface ICalculateSomething
{
    SomeOutout1 CalculateSomething1(SomeInput1 input1);
    SomeOutout2 CalculateOSomething2(SomeInput2 input2);
}

public class CalculateSomething : ICalculateSomething
{
    SomeOutout1 ICalculateSomething.CalculateSomething1(SomeInput1 input1)
    {
        var output = new SomeOutout1();
        output.Prop1 = calculateFromInput1(input1.Prop1, input1.Prop2);
        output.Prop3 = calculateFromInput2(input1.Prop3, input1.Prop4);
        return output;
    }

    SomeOutout2 ICalculateSomething.CalculateOSomething2(SomeInput2 input2)
    {
        var output = new SomeOutout2();
        output.Prop1 = calculateFromInput1(input2.Prop1, input2.Prop2);
        output.Prop3 = calculateFromInput2(input2.Prop3, input2.Prop4);
        return output;
    }
}
I would like to test these two methods in the CalculateSomething. Those methods implementation are long and complicated. How should I structure my test?
I don't see a reason for not using a straight-forward unit test implementation. I'd start with a basic test method:
[TestMethod]
public void CalculateSomething1_FooInput()
{
    var input = new SomeInput1("Foo");
    var expected = new SomeOutput1(...);
    var calc = new CalculateSomething(...);

    var actual = calc.CalculateSomething1(input);

    Assert.AreEqual(expected.Prop1, actual.Prop1);
    Assert.AreEqual(expected.Prop2, actual.Prop2);
    Assert.AreEqual(expected.Prop3, actual.Prop3);
}
And then, as you add CalculateSomething1_BarInput and CalculateSomething2_FooInput, factor out some common code into helper methods:
[TestMethod]
public void CalculateSomething1_FooInput()
{
    var input = new SomeInput1("Foo");
    var expected = new SomeOutput1(...);

    var actual = CreateTestCalculateSomething().CalculateSomething1(input);

    AssertSomeOutput1Equality(expected, actual);
}
As far as unit testing is concerned, you would have to create test methods for the functions you want to cover.
[TestMethod()]
public void CalculateSomething1()
{
    // First we have to define the input for the function.
    var input = new SomeInput1(); // Assumes your constructor creates the values for Prop1 and Prop2. Change as needed.

    var classToBeTested = new CalculateSomething();
    // The interface is implemented explicitly, so the call has to go through the interface.
    var output = ((ICalculateSomething)classToBeTested).CalculateSomething1(input);

    // There are multiple ways to test if the outcome is correct; choose the one that fits the method/output.
    Assert.IsNotNull(output);
}
The method above would be in a unit test project and associated class file.
Some things to keep in mind when unit testing
Unit tests need to be independent
Long, complicated code should be refactored into smaller units of code and tested.
Interfaces are an awesome way to remove dependencies. The use of interfaces allows concepts such as mocking. Mocking can be a little complicated at first, so take your time when learning it. There are several mocking frameworks out there that can help a lot, e.g. RhinoMocks or Moq, just to name a couple; see the sketch below.
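For example, if CalculateSomething ever grows an external dependency, a Moq-based test might look like this (a sketch; the IDataProvider interface is invented purely to illustrate mocking):
using Moq;

// Hypothetical dependency, shown only to illustrate mocking.
public interface IDataProvider
{
    int GetFactor();
}

[TestMethod]
public void CalculateSomething1_UsesFactorFromProvider()
{
    // Arrange: create a mock and tell it what to return.
    var provider = new Mock<IDataProvider>();
    provider.Setup(p => p.GetFactor()).Returns(42);

    // Act: pass provider.Object wherever the real dependency would go.
    var factor = provider.Object.GetFactor();

    // Assert: the mock returned the canned value and was actually called.
    Assert.AreEqual(42, factor);
    provider.Verify(p => p.GetFactor(), Times.Once);
}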
Those are explicitly implemented interface methods, so you have to use an interface reference to call them in your tests.
var input1 = new SomeInput1();
// setup required data in input1.
ICalculateSomething calculator = new CalculateSomething();
var output = calculator.CalculateSomething1(input1);
// Have assert statements on the properties of output to verify the calculation.
Don't use var for calculator, because that will give you a CalculateSomething reference and the interface methods are hidden.

Add Test Case to ITestSuiteBase in TFS API

I'm working with the TFS API and have run into a problem with ITestSuiteBase and IRequirementTestSuite. I've managed to easily create a new test case within an IStaticTestSuite:
IStaticTestSuite workingSuite = this.WorkingSuite as IStaticTestSuite;
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
workingSuite.Entries.Add(testCase);
this.Plan.Save();
However, this solution doesn't work for requirements test suites or ITestSuiteBase. The method that I would assume would work is:
ITestcase testCase = null;
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
this.WorkingSuite.AllTestCases.Add(testCase);
this.WorkingSuite.TestCases.Add(testCase);
this.Plan.Save();
But this method doesn't actually add the test case to the suite. It does, however, add the test case to the plan. I can query the created test case but it doesn't show up in the suite as expected - even immediately in the code afterwards. Refreshing the working suite has no benefit.
Additional code included below:
public static ITestCase CreateTestCase(ITestManagementTeamProject project, string title, string desc = "", TeamFoundationIdentity owner = null)
{
    // Create a test case.
    ITestCase testCase = project.TestCases.Create();
    testCase.Owner = owner;
    testCase.Title = title;
    testCase.Description = desc;
    testCase.Save();
    return testCase;
}
Has anyone been able to successfully add a test case to a requirements test suite or a ITestSuiteBase?
Giulio's link proved to be the best way to do this:
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);

if (this.BaseWorkingSuite is IRequirementTestSuite)
    TFS_API.AddTestCaseToRequirementSuite(this.BaseWorkingSuite as IRequirementTestSuite, testCase);
else if (this.BaseWorkingSuite is IStaticTestSuite)
    (this.BaseWorkingSuite as IStaticTestSuite).Entries.Add(testCase);

this.Plan.Save();
And the important method:
public static void AddTestCaseToRequirementSuite(IRequirementTestSuite reqSuite, ITestCase testCase)
{
    WorkItemStore store = reqSuite.Project.WitProject.Store;
    WorkItem tfsRequirement = store.GetWorkItem(reqSuite.RequirementId);

    tfsRequirement.Links.Add(new RelatedLink(store.WorkItemLinkTypes.LinkTypeEnds["Tested By"], testCase.WorkItem.Id));
    tfsRequirement.Save();
    reqSuite.Repopulate();
}
This is expected.
Static Test Suites are ... static while Requirement-based Test Suites are dynamic. The relationship between a Test Case and a Requirement is determined by the presence of a proper Tests/Tested By Work Item Link, so you need to add such a link.
For sample code see Not able to add test cases to type of IRequirementTestSuite.
Small note: you cannot duplicate links, so you may have to check for existence if the Test Case is not new.
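To guard against that, one possible existence check before adding the link (a sketch against the same work item client API used above; it needs using System.Linq, and you may also want to verify the link type end is "Tested By"):
public static bool HasLinkToWorkItem(WorkItem requirement, int testCaseId)
{
    // Look for an existing related link pointing at the test case work item.
    foreach (RelatedLink link in requirement.Links.OfType<RelatedLink>())
    {
        if (link.RelatedWorkItemId == testCaseId)
            return true;
    }
    return false;
}
It could be called with tfsRequirement and testCase.WorkItem.Id before the Links.Add call in AddTestCaseToRequirementSuite.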

TDD workflow best-practices for .NET using NUnit

UPDATE: I made major changes to this post - check the revision history for details.
I'm starting to dive into TDD with NUnit, and although I've enjoyed checking some of the resources I've found here at Stack Overflow, I often find myself not gaining good traction.
So what I'm really trying to achieve is to acquire some sort of checklist/workflow (and here's where I need you guys to help me out) or "Test Plan" that will give me decent code coverage.
So let's assume an ideal scenario where we could start a project from scratch with, let's say, a Mailer helper class that has the following code:
(I've created the class just for the sake of aiding the question with a code sample, so any criticism or advice is encouraged and will be very welcome.)
Mailer.cs
using System.Net.Mail;
using System;

namespace Dotnet.Samples.NUnit
{
    public class Mailer
    {
        readonly string from;
        public string From { get { return from; } }

        readonly string to;
        public string To { get { return to; } }

        readonly string subject;
        public string Subject { get { return subject; } }

        readonly string cc;
        public string Cc { get { return cc; } }

        readonly string bcc;
        public string BCc { get { return bcc; } }

        readonly string body;
        public string Body { get { return body; } }

        readonly string smtpHost;
        public string SmtpHost { get { return smtpHost; } }

        readonly string attachment;
        public string Attachment { get { return attachment; } } // returning the property itself here would recurse forever

        public Mailer(string from = null, string to = null, string body = null, string subject = null, string cc = null, string bcc = null, string smtpHost = "localhost", string attachment = null)
        {
            this.from = from;
            this.to = to;
            this.subject = subject;
            this.body = body;
            this.cc = cc;
            this.bcc = bcc;
            this.smtpHost = smtpHost;
            this.attachment = attachment;
        }

        public void SendMail()
        {
            if (string.IsNullOrEmpty(From))
                throw new ArgumentNullException("from", "Sender e-mail address cannot be null or empty.");

            SmtpClient smtp = new SmtpClient();
            MailMessage mail = new MailMessage();
            smtp.Send(mail);
        }
    }
}
MailerTests.cs
using System;
using NUnit.Framework;
using FluentAssertions;

namespace Dotnet.Samples.NUnit
{
    [TestFixture]
    public class MailerTests
    {
        [Test, Ignore("No longer needed as the required code to pass has been already implemented.")]
        public void SendMail_FromArgumentIsNotNullOrEmpty_ReturnsTrue()
        {
            // Arrange
            dynamic argument = null;

            // Act
            Mailer mailer = new Mailer(from: argument);

            // Assert
            Assert.IsNotNullOrEmpty(mailer.From, "Parameter cannot be null or empty.");
        }

        [Test]
        public void SendMail_FromArgumentIsNullOrEmpty_ThrowsException()
        {
            // Arrange
            dynamic argument = null;
            Mailer mailer = new Mailer(from: argument);

            // Act
            Action act = () => mailer.SendMail();
            act.ShouldThrow<ArgumentNullException>();

            // Assert
            Assert.Throws<ArgumentNullException>(new TestDelegate(act));
        }

        [Test]
        public void SendMail_FromArgumentIsOfTypeString_ReturnsTrue()
        {
            // Arrange
            dynamic argument = String.Empty;

            // Act
            Mailer mailer = new Mailer(from: argument);

            // Assert
            mailer.From.Should().Be(argument, "Parameter should be of type string.");
        }

        // INFO: At this first 'iteration' I've almost covered the first argument of the method, so logically this sample is nowhere near completed.
        // TODO: Create a test that will eventually require the implementation of a method to validate a well-formed email address.
        // TODO: Create as many tests as needed to give the remaining parameters good code coverage.
    }
}
So after having my first two failing tests, the next obvious step would be implementing the functionality to make them pass. But should I keep those tests and create new ones after implementing the code that makes them pass, or should I modify the existing ones once they pass?
Any advice about this topic will really be enormously appreciated.
If you install TestDriven.Net, one of its bundled components (NCover) actually helps you understand how much of your code is covered by unit tests.
Barring that, the best solution is to check each line, and run each test to make sure you've at least hit that line once.
I'd suggest that you pick up some tool like NCover which can hook onto your test cases to give code coverage stats. There is also a community edition of NCover if you don't want the licensed version.
If you use a framework like NUnit, there are methods available such as Assert.Throws, where you can assert that a method throws the required exception given the input: http://www.nunit.org/index.php?p=assertThrows&r=2.5
Basically, verifying expected behavior given good and bad inputs is the best place to start.
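Tying that back to the Mailer example above, a bad-input test can be as small as this (a sketch; it assumes the guard clause passes "from" as the exception's parameter name, as in the class shown earlier):
[Test]
public void SendMail_WithoutFrom_ThrowsArgumentNullException()
{
    var mailer = new Mailer(from: null);

    // Bad input: the guard clause should reject a missing sender.
    var ex = Assert.Throws<ArgumentNullException>(() => mailer.SendMail());
    Assert.That(ex.ParamName, Is.EqualTo("from"));
}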
When people (finally!) decide to apply test coverage to an existing code base, it is impractical to test everything; you don't have the resources, and there isn't often a lot of real value.
What you ideally want to do is to make sure that your tests apply to newly written/modified code and anything that might be affected by those changes.
To do this, you need to know:
what code you changed. Your source control system will help you here at the level of this-file-changed.
what code is executed as a consequence of the new code being executed. For this you need either a static analyzer that can trace the downstream impact of the code (don't know of many of these) or a test coverage tool, which can show what has been executed when you run your specific tests. Any such executed code probably needs re-testing, too.
Because you want to minimize the amount of test code you write, you clearly want finer than file-level granularity of "changed". You can use a diff tool (often built into your source control system) to help hone the focus to specific lines. Diff tools don't actually understand code structure, so what they report tends to be line-oriented rather than structure-oriented, producing rather bigger diffs than necessary; nor do they tell you the convenient point of test access, which is likely to be a method, because the whole style of unit testing is focused on testing methods.
You can get better diff tools. Our Smart Differencer tools provide differences in terms of program structures (expressions, statements, methods) and abstracting editing operations (insert, delete, copy, move, replace, rename) which can make it easier to interpret the code changes. This doesn't directly solve the "which method changed?" question, but it often means looking at a lot less stuff to make that decision.
You can get test coverage tools that will answer this question. Our Test Coverage tools have a facility to compare previous test coverage runs with current test coverage runs, to tell you which tests have to be re-run. They do so by examining the code differences (something like the Smart Differencer) but abstract the changes back to method level.
