I have created a project with just one method, written a unit test for it, and the test passes locally. But I'm not sure why, after running the SonarCloud scanner, it shows zero percent coverage.
This is the test class
public class DataStructureTest
{
    private readonly DataStructure ds;

    public DataStructureTest()
    {
        ds = new DataStructure();
    }

    [Theory, MemberData(nameof(LongestString_Return_Longest_String_ShouldPass_Data))]
    public void LongestString_Return_Longest_String_ShouldPass(string input, string expect)
    {
        // Act
        var actual = ds.LongestString(input);

        // Assert
        Assert.Equal(expect, actual);
    }

    public static TheoryData<string, string> LongestString_Return_Longest_String_ShouldPass_Data()
    {
        return new TheoryData<string, string>
        {
            { "Hello John", "Hello" },
            { "Hi John and Mandy", "Mandy" }
        };
    }
}
You have to be careful about what these tools mean when they use a given term. For example, SonarQube has the following article: https://community.sonarsource.com/t/sonarqube-and-code-coverage/4725
The FAQ has this as its first question:
Q: After migrating from 5.6 to 6.7 my coverage shows 0%, why is that?
A: Since SonarQube 6.2 and the implementation of MMF-345, if no coverage information is found, the coverage is then set to zero by default.
I think your case may come under this: the scanner ran, but no coverage report was found for it to import, so coverage defaulted to zero. You generally need to run your tests with a coverage tool and tell the scanner where the resulting report lives.
I'm assuming this is an error on my part, but I can't figure out why ReSharper's dotCover is showing my test coverage of certain queries (and commands too) as 0%.
So I have a .NET Core CQRS API that is made up of a lot of EF Core LINQ. Below is a simple example of the main execute method of one of my queries (I left out the DI constructor, but I'm sure you get the idea):
public bool Execute(SelectIsReportRequested query)
{
    var context = _clientDatabase.GetContext(query.DatabaseId);

    var result = (from a in context.Assessments
                  join r in context.Registrations on a.AssessmentId equals r.AssessmentId
                  where a.PublicId == query.ResponseId
                  select r.ReportRequested).SingleOrDefault();

    return result == 1;
}
Then I have the following test that mocks the various bits and runs the query:
[TestMethod]
public void It_should_return_true_if_a_report_has_been_requested_for_the_given_assessment()
{
    const int assessmentId = 1;
    var responseId = Guid.NewGuid();

    var mockRepository = new Mock<ICViewClientContext>();

    var assessments = new List<Assessments>
    {
        new Assessments { AssessmentId = assessmentId, PublicId = responseId },
    };

    var registrations = new List<Registrations>
    {
        new Registrations { AssessmentId = assessmentId, ReportRequested = 1 },
    };

    mockRepository.Setup(x => x.Registrations).Returns(registrations.AsDbSetMock().Object);
    mockRepository.Setup(x => x.Assessments).Returns(assessments.AsDbSetMock().Object);

    var mockClientDatabase = new Mock<IClientDatabase>();
    mockClientDatabase.Setup(x => x.GetContext(1)).Returns(mockRepository.Object);

    var query = new Queries.Assessments.SelectIsReportRequested(2, responseId);
    var handler = new Queries.Assessments.SelectIsReportRequestedHandler(mockClientDatabase.Object);

    var result = handler.Execute(query);

    Assert.AreEqual(true, result);
}
The test passes (and will also fail if I break the logic in the LINQ, or any other logic in the code).
However, running dotCover runs the test and passes it, but says that none of the code is covered.
I would love to know why because it's really driving me insane and worries me that I've done something completely wrong!
So I think through blind luck I have been able to solve my issue and wanted to post what I did just in case it helps anyone else.
Whilst trying to get the logs to submit to JetBrains I did the following:
In ReSharper | Options… | dotCover | General, disabled 'Use preloaded Unit Test runners'
Saved settings
Went back and enabled 'Use preloaded Unit Test runners'
Saved settings
Then I re-ran dotCover, and suddenly all my test coverage was shown and all my covered-code highlighting was displayed correctly.
I've sent a message back to JetBrains and if they give me any info as to why that solved it I'll post that too.
I had a similar issue when dotCover didn't recognize some of the unit tests.
I was able to resolve it by removing Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll from Test Project references and installing MSTest.TestFramework and MSTest.TestAdapter nuget packages.
We've got some integration tests in our solution. To run these tests, simulation software must be installed on the developer PC. This software is, however, not installed on every developer PC. If the simulation software is not installed, these tests should be skipped; otherwise they fail with a NullReferenceException.
I'm now looking for a way to do a "conditional ignore" for tests/test fixtures.
Something like
if(simulationFilesExist)
do testfixture
else
skip testfixture
NUnit gives you some useful attributes like Ignore and Explicit, but they're not quite what I need.
Use some code in your test or fixture set up method that detects if the simulation software is installed or not and calls Assert.Ignore() if it isn't.
[SetUp]
public void TestSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting.");
    }
}
or
[TestFixtureSetUp]
public void FixtureSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting fixture.");
    }
}
In NUnit 3.0 and higher you have to use the OneTimeSetUp attribute instead of TestFixtureSetUp.
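For example, a minimal NUnit 3 sketch of the same fixture-level check (assuming the same hypothetical TestHelper as above):

using NUnit.Framework;

[TestFixture]
public class SimulationTests
{
    [OneTimeSetUp]
    public void FixtureSetUp()
    {
        // Ignores every test in this fixture when the simulator is missing.
        if (!TestHelper.SimulationFilesExist())
        {
            Assert.Ignore("Simulation files are not installed. Omitting fixture.");
        }
    }
}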
NUnit also gives you the option to supply a Category attribute.
Depending on how you are launching your tests, it may be appropriate to flag all the tests that require the simulator with a known category (e.g., [Category("RequiresSimulationSoftware")]). Then from the NUnit GUI you can choose to exclude certain categories. You can do the same thing from the NUnit command line runner (specify /exclude:RequiresSimulationSoftware if applicable).
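As a sketch, the category-based approach might look like this (the fixture and test names are placeholders):

using NUnit.Framework;

[TestFixture]
public class SimulatorTests
{
    [Test, Category("RequiresSimulationSoftware")]
    public void Simulation_produces_expected_output()
    {
        // Runs only when the category isn't excluded, e.g.:
        // nunit-console MyTests.dll /exclude:RequiresSimulationSoftware
        Assert.Pass();
    }
}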
I didn't want to duplicate the Assert.Ignore condition in every test case, so I ended up using a custom attribute class, which I derived from the NUnitAttribute class:
using System;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using NUnit.Framework.Internal;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class SimulatorOnlyAttribute : NUnitAttribute, IApplyToTest
{
    public void ApplyToTest(Test test)
    {
        if (test.RunState == RunState.NotRunnable)
        {
            return;
        }

        if (!Helper.RunsOnSimulator)
        {
            test.RunState = RunState.Ignored;
            test.Properties.Set(PropertyNames.SkipReason, "This test should run only on simulator");
        }
    }
}
So now I can just mark the required test cases with the new attribute:
[Test, SimulatorOnly]
public void Test()
{
}
For reference, you could investigate the source code of the IgnoreAttribute.
Use:
[SetUp]
public void TestSetUp()
{
    if (!TestHelper.SimulationFilesExist())
    {
        Assert.Ignore("Simulation files are not installed. Omitting.");
    }
}
You can use this type of condition in a TestFixtureSetUp method as well. But if the fixture has parameterized tests and you try to ignore them from there, it can go into an infinite loop and your test run will hang. So it's better to put the if condition in a SetUp method, as sketched below.
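To make that concrete, here's a minimal sketch (TestHelper is the same hypothetical helper as above) with the condition in SetUp so each parameterized case is skipped cleanly:

using NUnit.Framework;

[TestFixture]
public class ParameterizedSimulationTests
{
    [SetUp]
    public void TestSetUp()
    {
        // Runs before every test case, including each [TestCase] row.
        if (!TestHelper.SimulationFilesExist())
        {
            Assert.Ignore("Simulation files are not installed. Omitting.");
        }
    }

    [TestCase(1)]
    [TestCase(2)]
    public void Simulation_handles_input(int input)
    {
        Assert.That(input, Is.GreaterThan(0));
    }
}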
There are a lot of ways to alter the result status of a test. Here are a few, along with ways to read the various statuses back out:
// Ways to alter the result status:
TestExecutionContext.CurrentContext.CurrentTest.MakeInvalid("I want this test to be SKIPPED");

ResultState resultStateObject = new ResultState(TestStatus.Skipped);
TestExecutionContext.CurrentContext.CurrentResult.SetResult(resultStateObject, "this test is being skipped derp derp");

TestExecutionContext.CurrentContext.CurrentTest.RunState = RunState.Ignored;
Logger.log("After doing things");

// Ways to read the statuses back out:
resultstate = TestExecutionContext.CurrentContext.CurrentResult.ResultState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State: " + resultstate);

resultstatestatus = TestExecutionContext.CurrentContext.CurrentResult.ResultState.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State Status: " + resultstatestatus);

runstate = TestExecutionContext.CurrentContext.CurrentTest.RunState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Run State: " + runstate); //test="#runstate = 'Skipped' or #runstate = 'Ignored' or #runstate='Inconclusive'

status = TestContext.CurrentContext.Result.Outcome.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result Status: " + status);

message = TestExecutionContext.CurrentContext.CurrentResult.Message.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Message: " + message);
I am new to unit testing and wondering how to start. The application I am currently working on does not have any unit tests. It is a WinForms application, and I am only interested in testing the data layer of this application.
Here is an example.
public interface ICalculateSomething
{
    SomeOutput1 CalculateSomething1(SomeInput1 input1);
    SomeOutput2 CalculateSomething2(SomeInput2 input2);
}

public class CalculateSomething : ICalculateSomething
{
    SomeOutput1 ICalculateSomething.CalculateSomething1(SomeInput1 input1)
    {
        var output = new SomeOutput1();
        output.Prop1 = calculateFromInput1(input1.Prop1, input1.Prop2);
        output.Prop3 = calculateFromInput2(input1.Prop3, input1.Prop4);
        return output;
    }

    SomeOutput2 ICalculateSomething.CalculateSomething2(SomeInput2 input2)
    {
        var output = new SomeOutput2();
        output.Prop1 = calculateFromInput1(input2.Prop1, input2.Prop2);
        output.Prop3 = calculateFromInput2(input2.Prop3, input2.Prop4);
        return output;
    }
}
I would like to test these two methods in CalculateSomething. Those methods' implementations are long and complicated. How should I structure my test?
I don't see a reason for not using a straight-forward unit test implementation. I'd start with a basic test method:
[TestMethod]
public void CalculateSomething1_FooInput()
{
    var input = new SomeInput1("Foo");
    var expected = new SomeOutput1(...);
    var calc = new CalculateSomething(...);

    var actual = calc.CalculateSomething1(input);

    Assert.AreEqual(expected.Prop1, actual.Prop1);
    Assert.AreEqual(expected.Prop2, actual.Prop2);
    Assert.AreEqual(expected.Prop3, actual.Prop3);
}
And then, as you add CalculateSomething1_BarInput and CalculateSomething2_FooInput, factor out some common code into helper methods:
[TestMethod]
public void CalculateSomething1_FooInput()
{
    var input = new SomeInput1("Foo");
    var expected = new SomeOutput1(...);

    var actual = CreateTestCalculateSomething().CalculateSomething1(input);

    AssertSomeOutput1Equality(expected, actual);
}
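The helpers aren't shown above, but a minimal sketch of what they might look like (the names and the choice of asserted properties are just placeholders) is:

// Hypothetical helpers factored out of the tests above.
private static ICalculateSomething CreateTestCalculateSomething()
{
    // Centralizing construction means a new dependency only changes one place.
    return new CalculateSomething();
}

private static void AssertSomeOutput1Equality(SomeOutput1 expected, SomeOutput1 actual)
{
    Assert.AreEqual(expected.Prop1, actual.Prop1);
    Assert.AreEqual(expected.Prop2, actual.Prop2);
    Assert.AreEqual(expected.Prop3, actual.Prop3);
}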
As far as unit testing is concerned, you would have to create test methods for the functions that you want to test.
[TestMethod()]
public void CalculateSomething1()
{
    // First we have to define the input for the function.
    var input = new SomeInput1(); // Assumes your constructor creates the values for Prop1 and Prop2. Change as needed.
    ICalculateSomething classToBeTested = new CalculateSomething();

    var output = classToBeTested.CalculateSomething1(input);

    // There are multiple ways to test if the outcome is correct; choose the one that is correct for the method/output.
    Assert.IsNotNull(output);
}
The method above would be in a unit test project and associated class file.
Some things to keep in mind when unit testing:
Unit tests need to be independent.
Long, complicated code should be refactored down into smaller units of code and tested.
Interfaces are an awesome way to remove dependencies. The use of interfaces allows concepts such as mocking. Mocking can be a little complicated at first, so take your time when learning it. There are several mocking frameworks out there that can help a lot, e.g., RhinoMocks or Moq, just to name a couple; see the sketch below.
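For example, a small Moq sketch against the ICalculateSomething interface from the question (the setup and returned values are made up):

using Moq;

// A consumer of ICalculateSomething can be tested without the real,
// long and complicated implementation behind it.
var mock = new Mock<ICalculateSomething>();
mock.Setup(m => m.CalculateSomething1(It.IsAny<SomeInput1>()))
    .Returns(new SomeOutput1());

ICalculateSomething calculator = mock.Object;
Assert.IsNotNull(calculator.CalculateSomething1(new SomeInput1()));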
Those are explicitly implemented interface methods, so you have to use an interface reference to test them.
var input1 = new SomeInput1();
// setup required data in input1.
ICalculateSomething calculator = new CalculateSomething();
var output = calculator.CalculateSomething1(input1);
// Have assert statements on the properties of output to verify the calculation.
Don't use var for calculator, because that will give you a CalculateSomething reference and the interface methods are hidden.
UPDATE: I made major changes to this post - check the revision history for details.
I'm starting to dive into TDD with NUnit, and although I've enjoyed checking out some resources I've found here at Stack Overflow, I often find myself not gaining good traction.
So what I'm really trying to achieve is to acquire some sort of checklist/workflow —and here's where I need you guys to help me out— or "Test Plan" that will give me decent Code Coverage.
So let's assume an ideal scenario where we could start a project from scratch with let's say a Mailer helper class that would have the following code:
(I've created the class just for the sake of aiding the question with a code sample so any criticism or advice is encouraged and will be very welcome)
Mailer.cs
using System;
using System.Net.Mail;

namespace Dotnet.Samples.NUnit
{
    public class Mailer
    {
        readonly string from;
        public string From { get { return from; } }

        readonly string to;
        public string To { get { return to; } }

        readonly string subject;
        public string Subject { get { return subject; } }

        readonly string cc;
        public string Cc { get { return cc; } }

        readonly string bcc;
        public string Bcc { get { return bcc; } }

        readonly string body;
        public string Body { get { return body; } }

        readonly string smtpHost;
        public string SmtpHost { get { return smtpHost; } }

        readonly string attachment;
        public string Attachment { get { return attachment; } }

        public Mailer(string from = null, string to = null, string body = null, string subject = null, string cc = null, string bcc = null, string smtpHost = "localhost", string attachment = null)
        {
            this.from = from;
            this.to = to;
            this.subject = subject;
            this.body = body;
            this.cc = cc;
            this.bcc = bcc;
            this.smtpHost = smtpHost;
            this.attachment = attachment;
        }

        public void SendMail()
        {
            if (string.IsNullOrEmpty(From))
                throw new ArgumentNullException("from", "Sender e-mail address cannot be null or empty.");

            SmtpClient smtp = new SmtpClient();
            MailMessage mail = new MailMessage();
            smtp.Send(mail);
        }
    }
}
MailerTests.cs
using System;
using NUnit.Framework;
using FluentAssertions;

namespace Dotnet.Samples.NUnit
{
    [TestFixture]
    public class MailerTests
    {
        [Test, Ignore("No longer needed as the required code to pass has been already implemented.")]
        public void SendMail_FromArgumentIsNotNullOrEmpty_ReturnsTrue()
        {
            // Arrange
            dynamic argument = null;

            // Act
            Mailer mailer = new Mailer(from: argument);

            // Assert
            Assert.IsNotNullOrEmpty(mailer.From, "Parameter cannot be null or empty.");
        }

        [Test]
        public void SendMail_FromArgumentIsNullOrEmpty_ThrowsException()
        {
            // Arrange
            dynamic argument = null;
            Mailer mailer = new Mailer(from: argument);

            // Act
            Action act = () => mailer.SendMail();
            act.ShouldThrow<ArgumentNullException>();

            // Assert
            Assert.Throws<ArgumentNullException>(new TestDelegate(act));
        }

        [Test]
        public void SendMail_FromArgumentIsOfTypeString_ReturnsTrue()
        {
            // Arrange
            dynamic argument = String.Empty;

            // Act
            Mailer mailer = new Mailer(from: argument);

            // Assert
            mailer.From.Should().Be(argument, "Parameter should be of type string.");
        }

        // INFO: At this first 'iteration' I've almost covered the first argument of the method, so logically this sample is nowhere near completed.
        // TODO: Create a test that will eventually require the implementation of a method to validate a well-formed email address.
        // TODO: Create as many tests as needed to give the remaining parameters good code coverage.
    }
}
So after having my first two failing tests, the next obvious step would be implementing the functionality to make them pass. But should I keep the failing tests and create new ones after implementing the code that makes them pass, or should I modify the existing ones after making them pass?
Any advice about this topic will really be enormously appreciated.
If you install TestDriven.net, one of the components (called NCover) actually helps you understand how much of your code is covered by unit tests.
Barring that, the best solution is to check each line, and run each test to make sure you've at least hit that line once.
I'd suggest that you pick up some tool like NCover which can hook onto your test cases to give code coverage stats. There is also a community edition of NCover if you don't want the licensed version.
If you use a framework like NUnit, there are methods available such as Assert.Throws, where you can assert that a method throws the required exception given the input: http://www.nunit.org/index.php?p=assertThrows&r=2.5
Basically, verifying expected behavior given good and bad inputs is the best place to start.
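For instance, using the Mailer class from the question, a bad-input test could be sketched like this (the test name is just a placeholder):

using System;
using NUnit.Framework;

[Test]
public void SendMail_WithoutFrom_ThrowsArgumentNullException()
{
    // Bad input: no sender address supplied.
    var mailer = new Mailer(from: null);

    // Assert.Throws verifies both that an exception is thrown
    // and that it has the expected type.
    Assert.Throws<ArgumentNullException>(() => mailer.SendMail());
}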
When people (finally!) decide to apply test coverage to an existing code base, it is impractical to test everything; you don't have the resources, and there isn't often a lot of real value.
What you ideally want to do is to make sure that your tests apply to newly written/modified code and anything that might be affected by those changes.
To do this, you need to know:
what code you changed. Your source control system will help you here at the level of this-file-changed.
what code is executed as a consequence of the new code being executed. For this you need either a static analyzer that can trace the downstream impact of the code (don't know of many of these) or a test coverage tool, which can show what has been executed when you run your specific tests. Any such executed code probably needs re-testing, too.
Because you want to minimize the amount of test code you write, you clearly want better than file-level granularity of "changed". You can use a diff tool (often built into your source control system) to help hone the focus to specific lines. Diff tools don't actually understand code structure, so what they report tends to be line-oriented rather than structure-oriented, producing rather bigger diffs than necessary; nor do they tell you the convenient point of test access, which is likely to be a method, because the whole style of unit testing is focused on testing methods.
You can get better diff tools. Our Smart Differencer tools provide differences in terms of program structures (expressions, statements, methods) and abstract editing operations (insert, delete, copy, move, replace, rename), which can make it easier to interpret the code changes. This doesn't directly solve the "which method changed?" question, but it often means looking at a lot less stuff to make that decision.
You can get test coverage tools that will answer this question. Our Test Coverage tools have a facility to compare previous test coverage runs with current test coverage runs, to tell you which tests have to be re-run. They do so by examining the code differences (something like the Smart Differencer) but abstract the changes back to method level.
I find the TestCase feature in NUnit quite useful as a quick way to specify test parameters without needing a separate method for each test. Is there anything similar in MSTest?
[TestFixture]
public class StringFormatUtilsTest
{
    [TestCase("tttt", "")]
    [TestCase("", "")]
    [TestCase("t3a4b5", "345")]
    [TestCase("3&5*", "35")]
    [TestCase("123", "123")]
    public void StripNonNumeric(string before, string expected)
    {
        string actual = FormatUtils.StripNonNumeric(before);
        Assert.AreEqual(expected, actual);
    }
}
Microsoft recently announced "MSTest V2" (see the blog article). This allows you to consistently (desktop, UWP, ...) use the DataRow attribute!
[TestClass]
public class StringFormatUtilsTest
{
    [DataTestMethod]
    [DataRow("tttt", "")]
    [DataRow("", "")]
    [DataRow("t3a4b5", "345")]
    [DataRow("3&5*", "35")]
    [DataRow("123", "123")]
    public void StripNonNumeric(string before, string expected)
    {
        string actual = FormatUtils.StripNonNumeric(before);
        Assert.AreEqual(expected, actual);
    }
}
Again, Visual Studio Express' Test Explorer unfortunately doesn't recognize these tests. But at least the "full" VS versions now support that feature!
To use it, just install the NuGet packages MSTest.TestFramework and MSTest.TestAdapter (both pre-release as of now).
Older answer:
If you don't have to stick with MSTest and you're just using it to be able to run the tests via Test Explorer (because you only have a Visual Studio Express edition), then this might be a solution for you:
There's the VsTestAdapter VSIX extension for being able to run NUnit tests via Test Explorer. Unfortunately, VS Express users can't install extensions...
But fortunately the VsTestAdapter comes with a plain NuGet-Package, too!
So, if you're a VS Express user, just install the VsTestAdapter NuGet-Package and enjoy running your NUnit tests/testcases via Test Explorer!
Unfortunately, the aforementioned statement isn't true. While it's perfectly possible to install the package via an Express edition, it's useless, since it can't utilize the Test Explorer. There was previously a side note on an older version of the TestAdapter, which was removed from the 2.0.0 description page:
Note that it doesn't work with VS Express
I know this is a late answer but hopefully it helps others out.
I looked everywhere for an elegant solution and ended up writing one myself. We use it in over 20 projects with thousands of unit tests and hundreds of thousands of iterations. Never once missed a beat.
https://github.com/Thwaitesy/MSTestHacks
1) Install the NuGet package.
2) Inherit your test class from TestBase
public class UnitTest1 : TestBase
{ }
3) Create a property, field, or method that returns IEnumerable
[TestClass]
public class UnitTest1 : TestBase
{
    private IEnumerable<int> Stuff
    {
        get
        {
            // This could do anything, get a dynamic list from anywhere....
            return new List<int> { 1, 2, 3 };
        }
    }
}
4) Add the MSTest DataSource attribute to your test method, pointing back to the IEnumerable name above. This needs to be fully qualified.
[TestMethod]
[DataSource("Namespace.UnitTest1.Stuff")]
public void TestMethod1()
{
    var number = this.TestContext.GetRuntimeDataSourceObject<int>();
    Assert.IsNotNull(number);
}
End Result: 3 iterations just like the normal DataSource :)
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MSTestHacks;

namespace Namespace
{
    [TestClass]
    public class UnitTest1 : TestBase
    {
        private IEnumerable<int> Stuff
        {
            get
            {
                // This could do anything, get a dynamic list from anywhere....
                return new List<int> { 1, 2, 3 };
            }
        }

        [TestMethod]
        [DataSource("Namespace.UnitTest1.Stuff")]
        public void TestMethod1()
        {
            var number = this.TestContext.GetRuntimeDataSourceObject<int>();
            Assert.IsNotNull(number);
        }
    }
}
I know this is another late answer, but on my team, which is locked into using the MSTest framework, we developed a technique that relies only on anonymous types to hold an array of test data and LINQ to loop through and test each row. It requires no additional classes or frameworks and tends to be fairly easy to read and understand. It's also much easier to implement than data-driven tests using external files or a connected database.
For example, say you have an extension method like this:
public static class Extensions
{
    /// <summary>
    /// Get the Qtr with optional offset to add or subtract quarters
    /// </summary>
    public static int GetQuarterNumber(this DateTime parmDate, int offset = 0)
    {
        return (int)Math.Ceiling(parmDate.AddMonths(offset * 3).Month / 3m);
    }
}
You could use an array of anonymous types combined with LINQ to write a test like this:
[TestMethod]
public void MonthReturnsProperQuarterWithOffset()
{
    // Arrange
    var values = new[] {
        new { inputDate = new DateTime(2013, 1, 1), offset = 1, expectedQuarter = 2},
        new { inputDate = new DateTime(2013, 1, 1), offset = -1, expectedQuarter = 4},
        new { inputDate = new DateTime(2013, 4, 1), offset = 1, expectedQuarter = 3},
        new { inputDate = new DateTime(2013, 4, 1), offset = -1, expectedQuarter = 1},
        new { inputDate = new DateTime(2013, 7, 1), offset = 1, expectedQuarter = 4},
        new { inputDate = new DateTime(2013, 7, 1), offset = -1, expectedQuarter = 2},
        new { inputDate = new DateTime(2013, 10, 1), offset = 1, expectedQuarter = 1},
        new { inputDate = new DateTime(2013, 10, 1), offset = -1, expectedQuarter = 3}
        // Could add as many rows as you want, or extract to a private method that
        // builds the array of data
    };

    values.ToList().ForEach(val =>
    {
        // Act
        int actualQuarter = val.inputDate.GetQuarterNumber(val.offset);

        // Assert
        Assert.AreEqual(val.expectedQuarter, actualQuarter,
            "Failed for inputDate={0}, offset={1} and expectedQuarter={2}.", val.inputDate, val.offset, val.expectedQuarter);
    });
}
When using this technique it's helpful to use a formatted message that includes the input data in the Assert to help you identify which row causes the test to fail.
I've blogged about this solution with more background and detail at AgileCoder.net.
Khlr gave a good, detailed explanation, and apparently this approach started working in VS2015 Express for Desktop.
I tried to leave a comment, but my lack of reputation didn't allow me to do so.
Let me copy the solution here:
[TestClass]
public class StringFormatUtilsTest
{
    [DataTestMethod]
    [DataRow("tttt", "")]
    [DataRow("", "")]
    [DataRow("t3a4b5", "345")]
    [DataRow("3&5*", "35")]
    [DataRow("123", "123")]
    public void StripNonNumeric(string before, string expected)
    {
        string actual = FormatUtils.StripNonNumeric(before);
        Assert.AreEqual(expected, actual);
    }
}
To use it, just install the NuGet packages MSTest.TestFramework and MSTest.TestAdapter.
One problem is:
Error CS0433: The type 'TestClassAttribute' exists in both 'Microsoft.VisualStudio.QualityTools.UnitTestFramework, Version=10.0.0.0' and 'Microsoft.VisualStudio.TestPlatform.TestFramework, Version=14.0.0.0'
So, please remove Microsoft.VisualStudio.QualityTools.UnitTestFramework from the references of the project.
You're very welcome to edit the original reply and delete this one.
Consider using DynamicDataAttribute:
NUnit Test cases
private static readonly IEnumerable<TestCaseData> _testCases = new[]
{
    new TestCaseData("input value 1").Returns(new NameValueCollection { { "a", "b" } }),
    new TestCaseData("input value 2").Returns(new NameValueCollection { { "a", "b" } }),
    /* .. */
};

[TestCaseSource(nameof(_testCases))]
public NameValueCollection test_test(string str)
{
    var collection = new NameValueCollection();
    collection.TestedMethod(str);
    return collection;
}
MSTest Test cases
private static IEnumerable<object[]> _testCases
{
    get
    {
        return new[]
        {
            new object[] { "input value 1", new NameValueCollection { { "a", "b" } } },
            new object[] { "input value 2", new NameValueCollection { { "a", "b" } } },
            /* .. */
        };
    }
}

[TestMethod]
[DynamicData(nameof(_testCases))]
public void test_test(string str, NameValueCollection expectedResult)
{
    var collection = new NameValueCollection();
    collection.TestedMethod(str);
    CollectionAssert.AreEqual(expectedResult, collection);
}
MSTest has the DataSource attribute, which will allow you to feed it a database table, CSV, XML, etc. I've used it and it works well. I don't know of a way to put the data right above as attributes, as in your question, but it's very easy to set up the external data sources, and files can be included in the project. I had it running an hour from when I started, and I'm not an automated-testing expert.
https://msdn.microsoft.com/en-us/library/ms182527.aspx?f=255&MSPPError=-2147217396 has a full tutorial based on database input.
http://www.rhyous.com/2015/05/11/row-tests-or-paramerterized-tests-mstest-xml/ has a tutorial based on XML file input.
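For the XML route, the wire-up might be sketched like this (the file, table, and column names are made up; the test class also needs a public TestContext TestContext { get; set; } property):

[DeploymentItem("TestData.xml")]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.XML",
            "|DataDirectory|\\TestData.xml", "Row", DataAccessMethod.Sequential)]
[TestMethod]
public void StripNonNumeric_FromXmlRows()
{
    // Each <Row> element in TestData.xml supplies one iteration.
    string before = TestContext.DataRow["before"].ToString();
    string expected = TestContext.DataRow["expected"].ToString();

    Assert.AreEqual(expected, FormatUtils.StripNonNumeric(before));
}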