Add Test Case to ITestSuiteBase in TFS API - c#

I'm working with the TFS API and have run into a problem with ITestSuiteBase and IRequirementTestSuite. I've managed to easily create a new test case within an IStaticTestSuite:
IStaticTestSuite workingSuite = this.WorkingSuite as IStaticTestSuite;
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
workingSuite.Entries.Add(testCase);
this.Plan.Save();
However, this solution doesn't work for requirements test suites or ITestSuiteBase. The approach I assumed would work is:
ITestcase testCase = null;
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
this.WorkingSuite.AllTestCases.Add(testCase);
this.WorkingSuite.TestCases.Add(testCase);
this.Plan.Save();
But this method doesn't actually add the test case to the suite. It does, however, add the test case to the plan: I can query the created test case, but it doesn't show up in the suite as expected, even immediately afterwards in the code. Refreshing the working suite has no effect.
Additional code included below:
public static ITestCase CreateTestCase(ITestManagementTeamProject project, string title, string desc = "", TeamFoundationIdentity owner = null)
{
// Create a test case.
ITestCase testCase = project.TestCases.Create();
testCase.Owner = owner;
testCase.Title = title;
testCase.Description = desc;
testCase.Save();
return testCase;
}
Has anyone been able to successfully add a test case to a requirements test suite or a ITestSuiteBase?

Giulio's link proved to be the best way to do this
testCase = CreateTestCase(this.TestProject, tci.Title, tci.Description);
if (this.BaseWorkingSuite is IRequirementTestSuite)
TFS_API.AddTestCaseToRequirementSuite(this.BaseWorkingSuite as IRequirementTestSuite, testCase);
else if (this.BaseWorkingSuite is IStaticTestSuite)
(this.BaseWorkingSuite as IStaticTestSuite).Entries.Add(testCase);
this.Plan.Save();
And the important method:
public static void AddTestCaseToRequirementSuite(IRequirementTestSuite reqSuite, ITestCase testCase)
{
// A requirement-based suite is driven by work item links, so link the requirement
// to the test case with a "Tested By" link and let the suite repopulate itself.
WorkItemStore store = reqSuite.Project.WitProject.Store;
WorkItem tfsRequirement = store.GetWorkItem(reqSuite.RequirementId);
tfsRequirement.Links.Add(new RelatedLink(store.WorkItemLinkTypes.LinkTypeEnds["Tested By"], testCase.WorkItem.Id));
tfsRequirement.Save();
reqSuite.Repopulate();
}

This is expected.
Static Test Suites are ... static while Requirement-based Test Suites are dynamic. The relationship between a Test Case and a Requirement is determined by the presence of a proper Tests/Tested By Work Item Link, so you need to add such a link.
For sample code see Not able to add test cases to type of IRequirementTestSuite.
Small note: you cannot duplicate links, so you may have to check for existence if the Test Case is not new.
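A minimal sketch of such an existence check, reusing the objects from AddTestCaseToRequirementSuite above (the LINQ-based scan and the "Tested By" end-name comparison are assumptions; requires using System.Linq):
// Skip adding the link if the requirement is already "Tested By" this test case.
bool alreadyLinked = tfsRequirement.Links
.OfType<RelatedLink>()
.Any(l => l.RelatedWorkItemId == testCase.WorkItem.Id
&& l.LinkTypeEnd != null
&& l.LinkTypeEnd.Name == "Tested By");
if (!alreadyLinked)
{
tfsRequirement.Links.Add(new RelatedLink(store.WorkItemLinkTypes.LinkTypeEnds["Tested By"], testCase.WorkItem.Id));
tfsRequirement.Save();
reqSuite.Repopulate();
}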

Related

Testing validations C#

Below I've pasted my code. I'm validating a measure. I've written code that reads a file on a Linux host. But if I wanted to pass multiple file names here, would that be possible? For example, instead of my test validating just one file, could I use a loop so it reads multiple files in one go?
Once the file has been read and processed I return actualItemData. In my next method, I want to use that actualItemData so the data ends up in my var actual:
public string validateMeasurement()
{
var processFilePath = "/orabin/app/oracle/inputs/ff/ff/actuals/xx_ss_x.csv.ovr";
var actualItemData = Common.LinuxCommandExecutor.
RunLinuxcommand("cat " + processFilePath);
return actualItemData;
}
public void validateInventoryMeasurementValue(string Data, string itemStatus)
{
var expected = "6677,6677_6677,3001,6";
var actual = actualItemData; // <-- this is where I want to use the value returned above
Assert.AreEqual(expected, actual);
}
It looks like you are using MSTest. As far as I know it doesn't support test cases. If you were to use NUnit you would be able to do this using the TestCase attribute:
[TestCase("myfile1.txt", "6677,6677_6677,3001,6")]
[TestCase("myfile2.txt", "1,2,3")]
public void mytest(string path, string expected)
{
var actual = Common.LinuxCommandExecutor.
RunLinuxcommand("cat " + path);
Assert.AreEqual(expected, actual);
}
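If the list of files has to be built at run time instead of being hard-coded, NUnit's TestCaseSource attribute is an alternative: keep the same test body and swap the [TestCase] attributes for a source method. A minimal sketch (the file names and expected values below are placeholders; requires using System.Collections.Generic):
private static IEnumerable<TestCaseData> FileCases()
{
// Build this list however you like, e.g. by scanning a directory on the Linux host.
yield return new TestCaseData("myfile1.txt", "6677,6677_6677,3001,6");
yield return new TestCaseData("myfile2.txt", "1,2,3");
}
[TestCaseSource(nameof(FileCases))]
public void mytest(string path, string expected)
{
var actual = Common.LinuxCommandExecutor.RunLinuxcommand("cat " + path);
Assert.AreEqual(expected, actual);
}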
Generally you don't want to write unit tests that cross code boundaries (read files, hit the database, etc.), as these tests tend to be brittle and difficult to maintain. I am not sure of the aim of your code, but it appears you may be trying to parse the data to check its validity. If that is the case, you could write a series of tests to ensure that when your production code (the parser) is given a string input, you get an output that matches your expectation, e.g.:
[Test()]
public void Parse_GivenValidDataFromXX_S_X_CSV_ShouldReturnTrue()
{
// Arrange
var parser = CreateParser(); // factory function that returns your parser
// Act
var result = parser.Parse("6677,6677_6677,3001,6");
// Assert
Assert.IsTrue(result);
}
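The CreateParser() factory above is hypothetical; the class it returns is whatever production code currently validates the file contents. Purely for illustration, such a parser could be as small as this (the name and the four-field rule are made up; requires using System.Linq):
public class MeasurementParser
{
// Returns true when the line has exactly four comma-separated, non-empty fields.
public bool Parse(string line)
{
var fields = line.Split(',');
return fields.Length == 4 && fields.All(f => !string.IsNullOrWhiteSpace(f));
}
}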

how to change tester in test case (TFS API)

I'm trying to change the tester of a test case through the TFS API.
In the test case manager I see these testers: https://gyazo.com/03adc434225c4c5541f602bc954feaed
I tried to create and add a TestPointAssignment with one of these testers:
IdAndName idAndName = new IdAndName(testSuite.Id, testSuite.Title);
var assignment = testSuite.CreateTestPointAssignment(testCase.Id, idAndName, Tester);
testSuite.AssignTestPoints(new List<ITestPointAssignment>() { assignment });
but nothing changes and the tester stays the same.
How can I change the tester of a test case with the TFS API?
To change the Tester of a Test Case using the TFS API, you could try the following code snippet:
string teamProjectName = "TeamProjectName";
TfsTeamProjectCollection tfsCollection = new TfsTeamProjectCollection(new Uri("http://serverName:8080/tfs/MyCollection"));
ITestManagementService testService = tfsCollection.GetService<ITestManagementService>();
ITestManagementTeamProject teamProject = testService.GetTeamProject(teamProjectName);
//get test point of a test case
ITestPlan tplan = teamProject.TestPlans.Find(testplanid);
ITestPoint point = tplan.QueryTestPoints("SELECT * FROM TestPoint WHERE TestCaseID = Testcaseid").FirstOrDefault();
IIdentityManagementService ims = tfsCollection.GetService<IIdentityManagementService>();
TeamFoundationIdentity tester = ims.ReadIdentity(IdentitySearchFactor.DisplayName, "Mike", MembershipQuery.Direct, ReadIdentityOptions.None);
//change tester for testcase
point.AssignedTo = tester;
point.Save();
I believe your issue is with idAndName.
AssignTestPoints expects a list of ITestPointAssignment objects, and each assignment (created by CreateTestPointAssignment) combines:
The Test Case Id
The Configuration
The TeamFoundationIdentity (user)
I believe it's failing because you're specifying the suite id and name, not a configuration id and name.
As you're probably aware, each tester gets assigned a Test Point, which is the intersection of a test case and a configuration. In MTM, you can see your configurations in Organize -> Test Configuration Manager. You'll see the ID and Name there, though in code, you'll probably want to query that list through the suite's DefaultConfigurations property. (Note that if it's empty, it means it's inheriting configurations from its parent or ancestor, and you may have to get the values from there.)
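A sketch of building the assignment from a configuration instead of the suite, reusing testSuite, testCase and Tester from the question (it assumes DefaultConfigurations yields IdAndName entries and requires using System.Linq; check the exact types in your TFS client version):
// Pick a configuration the suite actually runs under; if DefaultConfigurations is
// empty, look at the parent suite or the plan as described above.
var configIdAndName = testSuite.DefaultConfigurations.First();
var assignment = testSuite.CreateTestPointAssignment(testCase.Id, configIdAndName, Tester);
testSuite.AssignTestPoints(new List<ITestPointAssignment>() { assignment });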

how to ignore a test result in NUnit? [duplicate]

We've got some integration tests in our solution. To run these tests, simulation software must be installed on the developer PC. This software is, however, not installed on every developer PC. If the simulation software is not installed, these tests should be skipped; otherwise they fail with a NullRefException.
I'm now looking for a way to do a "conditional ignore" for tests/test fixtures.
Something like
if(simulationFilesExist)
do testfixture
else
skip testfixture
NUnit gives some useful attributes like Ignore and Explicit, but they're not quite what I need.
Use some code in your test or fixture set up method that detects if the simulation software is installed or not and calls Assert.Ignore() if it isn't.
[SetUp]
public void TestSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting." );
}
}
or
[TestFixtureSetUp]
public void FixtureSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting fixture." );
}
}
In NUnit 3.0 and higher you have to use the OneTimeSetUp attribute instead of TestFixtureSetUp.
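For NUnit 3.x, the fixture-level check above then becomes (same TestHelper as before, only the attribute changes):
[OneTimeSetUp]
public void FixtureSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting fixture." );
}
}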
NUnit also gives you the option to supply a Category attribute.
Depending on how you are launching your tests, it may be appropriate to flag all the tests that require the simulator with a known category (e.g., [Category("RequiresSimulationSoftware")]). Then from the NUnit GUI you can choose to exclude certain categories. You can do the same thing from the NUnit command line runner (specify /exclude:RequiresSimulationSoftware if applicable).
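For illustration, flagging a whole fixture with such a category might look like this (the fixture and test names below are made up):
[TestFixture]
[Category("RequiresSimulationSoftware")]
public class SimulatorDependentTests
{
[Test]
public void SomeTestThatNeedsTheSimulator()
{
// Runs only when the category is not excluded by the runner.
}
}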
I didn't want to duplicate the Assert.Ignore condition in every test case, so I ended up using a custom attribute class derived from NUnitAttribute:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class SimulatorOnlyAttribute : NUnitAttribute, IApplyToTest
{
public void ApplyToTest(Test test)
{
if (test.RunState == RunState.NotRunnable)
{
return;
}
if (!Helper.RunsOnSimulator)
{
test.RunState = RunState.Ignored;
test.Properties.Set(PropertyNames.SkipReason, "This test should run only on simulator");
}
}
}
So now I can just mark required test cases with the new attribute:
[Test]
[SimulatorOnly]
public void Test()
For reference you could investigate source code of the IgnoreAttribute.
Use:
[SetUp]
public void TestSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting." );
}
}
You can use this type of condition in a TestFixtureSetUp method as well. But if the fixture has parameterized tests and you try to ignore them from there, the run can go into an infinite loop and your tests hang, so it is better to put the condition in the SetUp method.
There are a lot of ways to alter the result status of a test. Here are a few, plus ways to read the various status values back out (Logger here is a custom logging helper; these types live in NUnit's framework internals):
// Ways to mark the current test as invalid / skipped / ignored:
TestExecutionContext.CurrentContext.CurrentTest.MakeInvalid("I want this test to be SKIPPED");
ResultState resultStateObject = new ResultState(TestStatus.Skipped);
TestExecutionContext.CurrentContext.CurrentResult.SetResult(resultStateObject, "this test is being skipped");
TestExecutionContext.CurrentContext.CurrentTest.RunState = RunState.Ignored;
Logger.log("After doing things");
// Ways to read the current result and run state back out:
var resultstate = TestExecutionContext.CurrentContext.CurrentResult.ResultState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State: " + resultstate);
var resultstatestatus = TestExecutionContext.CurrentContext.CurrentResult.ResultState.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State Status: " + resultstatestatus);
var runstate = TestExecutionContext.CurrentContext.CurrentTest.RunState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Run State: " + runstate);
var status = TestContext.CurrentContext.Result.Outcome.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result Status: " + status);
var message = TestExecutionContext.CurrentContext.CurrentResult.Message.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Message: " + message);

How to structure my unit test?

I am new to unit testing and am wondering how to start. The application I am currently working on does not have any unit tests. It is a WinForms application, and I am only interested in testing the data layer of this application.
Here is an example.
public interface ICalculateSomething
{
SomeOutput1 CalculateSomething1(SomeInput1 input1);
SomeOutput2 CalculateSomething2(SomeInput2 input2);
}
public class CalculateSomething : ICalculateSomething
{
SomeOutput1 ICalculateSomething.CalculateSomething1(SomeInput1 input1)
{
var output = new SomeOutput1();
output.Prop1 = calculateFromInput1(input1.Prop1, input1.Prop2);
output.Prop3 = calculateFromInput2(input1.Prop3, input1.Prop4);
return output;
}
SomeOutput2 ICalculateSomething.CalculateSomething2(SomeInput2 input2)
{
var output = new SomeOutput2();
output.Prop1 = calculateFromInput1(input2.Prop1, input2.Prop2);
output.Prop3 = calculateFromInput2(input2.Prop3, input2.Prop4);
return output;
}
}
I would like to test these two methods in CalculateSomething. The method implementations are long and complicated. How should I structure my tests?
I don't see a reason for not using a straight-forward unit test implementation. I'd start with a basic test method:
[TestMethod]
public void CalculateSomething1_FooInput()
{
var input = new SomeInput1("Foo");
var expected = new SomeOutput1(...);
var calc = new CalculateSomething(...);
var actual = calc.CalculateSomething1(input);
Assert.AreEqual(expected.Prop1, actual.Prop1);
Assert.AreEqual(expected.Prop2, actual.Prop2);
Assert.AreEqual(expected.Prop3, actual.Prop3);
}
And then, as you add CalculateSomething1_BarInput and CalculateSomething2_FooInput, factor out some common code into helper methods:
[TestMethod]
public void CalculateSomething1_FooInput()
{
var input = new SomeInput1("Foo");
var expected = new SomeOutput1(...);
var actual = CreateTestCalculateSomething().CalculateSomething1(input);
AssertSomeOutput1Equality(expected, actual);
}
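The factored-out helpers referenced above might look something like this (a sketch; the exact set of properties compared in AssertSomeOutput1Equality is an assumption):
private static CalculateSomething CreateTestCalculateSomething()
{
// Centralizes construction so constructor changes only touch one place.
return new CalculateSomething();
}
private static void AssertSomeOutput1Equality(SomeOutput1 expected, SomeOutput1 actual)
{
Assert.AreEqual(expected.Prop1, actual.Prop1);
Assert.AreEqual(expected.Prop2, actual.Prop2);
Assert.AreEqual(expected.Prop3, actual.Prop3);
}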
As far as unit testing is concerned, you would have to create test methods for the functions that you want to cover.
[TestMethod()]
public void CalculateSomething1()
{
// First we have to define the input for the function
var input = new SomeInput1(); // Assumes your constructor creates the values for Prop1 and Prop2. Change as needed.
ICalculateSomething classToBeTested = new CalculateSomething();
var output = classToBeTested.CalculateSomething1(input);
// There are multiple ways to test whether the outcome is correct; choose the one appropriate for the method/output.
Assert.IsNotNull(output);
}
The method above would be in a unit test project and associated class file.
Some things to keep in mind when unit testing
Unit tests need to be independent
Long, complicated code should be refactored into smaller units of code and tested.
Interfaces are an awesome way to remove dependencies. The use of interfaces enables techniques such as mocking. Mocking can be a little complicated at first, so take your time when learning it. There are several mocking frameworks out there that can help a lot, e.g. RhinoMocks and Moq, to name a couple (a minimal Moq sketch follows after this list).
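As a purely illustrative sketch of the mocking point above, here is ICalculateSomething faked with Moq (the canned Prop1 value and the parameterless SomeInput1/SomeOutput1 constructors are assumptions; requires using Moq):
var mock = new Mock<ICalculateSomething>();
mock.Setup(c => c.CalculateSomething1(It.IsAny<SomeInput1>()))
.Returns(new SomeOutput1 { Prop1 = 42 });
// Hand mock.Object to whatever class under test depends on ICalculateSomething.
ICalculateSomething fake = mock.Object;
var output = fake.CalculateSomething1(new SomeInput1()); // returns the canned SomeOutput1
mock.Verify(c => c.CalculateSomething1(It.IsAny<SomeInput1>()), Times.Once());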
Those are explicitly implemented interface methods, so you have to use an interface reference to call them in your tests.
var input1 = new SomeInput1();
// setup required data in input1.
ICalculateSomething calculator = new CalculateSomething();
var output = calculator.CalculateSomething1(input1);
// Have assert statements on the properties of output to verify the calculation.
Don't use var for calculator, because that will give you a CalculateSomething reference and the interface methods are hidden.

TDD workflow best-practices for .NET using NUnit

UPDATE: I made major changes to this post - check the revision history for details.
I'm starting to dive into TDD with NUnit, and although I've enjoyed working through some of the resources I've found here at Stack Overflow, I often find myself not gaining good traction.
So what I'm really trying to achieve (and here's where I need you guys to help me out) is some sort of checklist/workflow or "Test Plan" that will give me decent code coverage.
So let's assume an ideal scenario where we could start a project from scratch with, let's say, a Mailer helper class that has the following code:
(I've created the class just for the sake of illustrating the question with a code sample, so any criticism or advice is encouraged and very welcome.)
Mailer.cs
using System.Net.Mail;
using System;
namespace Dotnet.Samples.NUnit
{
public class Mailer
{
readonly string from;
public string From { get { return from; } }
readonly string to;
public string To { get { return to; } }
readonly string subject;
public string Subject { get { return subject; } }
readonly string cc;
public string Cc { get { return cc; } }
readonly string bcc;
public string BCc { get { return bcc; } }
readonly string body;
public string Body { get { return body; } }
readonly string smtpHost;
public string SmtpHost { get { return smtpHost; } }
readonly string attachment;
public string Attachment { get { return attachment; } }
public Mailer(string from = null, string to = null, string body = null, string subject = null, string cc = null, string bcc = null, string smtpHost = "localhost", string attachment = null)
{
this.from = from;
this.to = to;
this.subject = subject;
this.body = body;
this.cc = cc;
this.bcc = bcc;
this.smtpHost = smtpHost;
this.attachment = attachment;
}
public void SendMail()
{
if (string.IsNullOrEmpty(From))
throw new ArgumentNullException("from", "Sender e-mail address cannot be null or empty.");
SmtpClient smtp = new SmtpClient();
MailMessage mail = new MailMessage();
smtp.Send(mail);
}
}
}
MailerTests.cs
using System;
using NUnit.Framework;
using FluentAssertions;
namespace Dotnet.Samples.NUnit
{
[TestFixture]
public class MailerTests
{
[Test, Ignore("No longer needed as the required code to pass has been already implemented.")]
public void SendMail_FromArgumentIsNotNullOrEmpty_ReturnsTrue()
{
// Arrange
dynamic argument = null;
// Act
Mailer mailer = new Mailer(from: argument);
// Assert
Assert.IsNotNullOrEmpty(mailer.From, "Parameter cannot be null or empty.");
}
[Test]
public void SendMail_FromArgumentIsNullOrEmpty_ThrowsException()
{
// Arrange
dynamic argument = null;
Mailer mailer = new Mailer(from: argument);
// Act
Action act = () => mailer.SendMail();
// Assert (either of these assertions alone is enough; both are shown here)
act.ShouldThrow<ArgumentNullException>();
Assert.Throws<ArgumentNullException>(new TestDelegate(act));
}
[Test]
public void SendMail_FromArgumentIsOfTypeString_ReturnsTrue()
{
// Arrange
dynamic argument = String.Empty;
// Act
Mailer mailer = new Mailer(from: argument);
// Assert
mailer.From.Should().Be(argument, "Parameter should be of type string.");
}
// INFO: At this first 'iteration' I've almost covered the first argument of the method so logically this sample is nowhere near completed.
// TODO: Create a test that will eventually require the implementation of a method to validate a well-formed email address.
// TODO: Create as much tests as needed to give the remaining parameters good code coverage.
}
}
So after having my first two failing tests, the next obvious step is to implement the functionality that makes them pass. But should I keep the failing tests and create new ones after implementing the code that makes them pass, or should I modify the existing ones once they pass?
Any advice about this topic will really be enormously appreciated.
If you install TestDriven.net, one of its components (NCover) actually helps you understand how much of your code is covered by unit tests.
Barring that, the best solution is to check each line and run each test to make sure you've hit every line at least once.
I'd suggest that you pick up a tool like NCover, which can hook into your test runs to give code coverage stats. There is also a community edition of NCover if you don't want the licensed version.
If you use a framework like NUnit, there are methods available such as Assert.Throws where you can assert that a method throws the required exception given the input: http://www.nunit.org/index.php?p=assertThrows&r=2.5
Basically, verifying expected behavior given good and bad inputs is the best place to start.
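For instance, a bad-input test against the Mailer class above could look like this (a sketch using NUnit's Assert.Throws; the recipient address is a placeholder):
[Test]
public void SendMail_WithoutSender_ThrowsArgumentNullException()
{
// No sender supplied, so SendMail should reject the call.
var mailer = new Mailer(to: "someone@example.com", body: "Hello");
Assert.Throws<ArgumentNullException>(() => mailer.SendMail());
}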
When people (finally!) decide to apply test coverage to an existing code base, it is impractical to test everything; you don't have the resources, and there isn't often a lot of real value.
What you ideally want to do is to make sure that your tests apply to newly written/modified code and anything that might be affected by those changes.
To do this, you need to know:
what code you changed. Your source control system will help you here at the level of this-file-changed.
what code is executed as a consequence of the new code being executed. For this you need either a static analyzer that can trace the downstream impact of the code (don't know of many of these) or a test coverage tool, which can show what has been executed when you run your specific tests. Any such executed code probably needs re-testing, too.
Because you want to minimize the amount of test code you write, you clearly want better than file-level granularity of "changed". You can use a diff tool (often built into your source control system) to help narrow the focus to specific lines. Diff tools don't actually understand code structure, so what they report tends to be line-oriented rather than structure-oriented, producing rather bigger diffs than necessary; nor do they tell you the convenient point of test access, which is likely to be a method, because the whole style of unit testing is focused on testing methods.
You can get better diff tools. Our Smart Differencer tools provide differences in terms of program structures (expressions, statements, methods) and abstracting editing operations (insert, delete, copy, move, replace, rename) which can make it easier to interpret the code changes. This doesn't directly solve the "which method changed?" question, but it often means looking at a lot less stuff to make that decision.
You can get test coverage tools that will answer this question. Our Test Coverage tools have a facility to compare previous test coverage runs with current test coverage runs, to tell you which tests have to be re-run. They do so by examining the code differences (something like the Smart Differencer) but abstract the changes back to method level.
