How would I achieve a range-based test in MSTest [duplicate] - c#

This question already has answers here:
How can we run a test method with multiple parameters in MSTest?
(9 answers)
Closed 6 years ago.
Using MSTest I want to run a test like this ...
var range = Enumerable.Range(0, 9);
foreach(var i in range)
{
Test(i);
}
... one theory I had was to create a new Test attribute like this ...
[TestClass]
public class CubeTests
{
[TestMethod]
[TestRange(0, 9)]
public void Test(int i)
{
// Test stuff
}
}
...
The key here is that I have some quite memory-intensive code that I would like MSTest to clean up between tests for me.
For something so simple I really don't want to be relying on files and using DataSource and DeploymentItem attributes.
Can this be done, if so, is anyone prepared to offer up an idea of how?
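For reference, newer MSTest versions (MSTest V2) later added inline data rows, which comes close to the attribute sketched above; a minimal sketch, assuming the MSTest.TestFramework package:
[TestClass]
public class CubeTests
{
    [DataTestMethod]
    [DataRow(0)]
    [DataRow(1)]
    [DataRow(2)]
    // ... one DataRow per value in the range
    [DataRow(9)]
    public void Test(int i)
    {
        // Test stuff
    }
}
Each DataRow runs as its own test case, so per-test initialization and cleanup should apply to every value.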

Maybe this is what you're looking for. Some years ago, Microsoft made available a Visual Studio extension called Pex.
Pex generates unit tests from a single parameterized test: it finds
interesting input-output values of your methods, which you can save as
a small test suite with high code coverage.
You can use assumptions and preconditions on the parameters of your test, which give you fine control over test generation.
Pex is no longer available (it was a research project), but its successor, IntelliTest, is, and it still uses the same static analysis engine.
IntelliTest generates a parameterized test that is modifiable, and
general/global assertions can be added there. It also generates the
minimum number of inputs that maximizes code coverage, and stores the
inputs as individual unit tests, each one calling the parameterized
test with a crafted input.
[PexMethod]
public void Test(int i)
{
PexAssume.IsTrue(i >= 0);
PexAssume.IsTrue(i < 10);
// Test stuff
}

You don't have to resort to built-in test runner magic. Simply add your range as a property of your test class:
private static IEnumerable<int> TestRange
{
get
{
int i = 0;
while(i < 10)
yield return i++;
}
}
Now, in your test method, you can run the loop as usual, using your uniquely defined test range:
[TestMethod]
public void DoStuff_RangeIsValid_NoExceptions(){
// Act
foreach(var i in TestRange){
// do the unit test here
}
}

Implement the test in a separate method and call that method from your test method.
I think you can do this directly in NUnit, but I am pretty sure you can't do it in MSTest.
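If NUnit is an option, a minimal sketch using its [Range] parameter attribute (assuming NUnit 3) would be:
[Test]
public void Test([Range(0, 9)] int i)
{
    // NUnit generates one test case per value,
    // and SetUp/TearDown run around each generated case
}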
In MSTest, if you want to clean up between iterations, you can call the GC after every call, or create a TestCleanUpImpl method (the snippet below calls GC.Collect() to show how to force a collection).
Would suggest something like the following:
public void TestSetup()
{
//Setup tests
}
public void TestCleanUpImpl()
{
//unassign variables
//dispose disposable object
GC.Collect();
}
public void TestImpl(int i)
{
// Test stuff
// Do assert statements here
}
[TestMethod]
public void Test()
{
int fromNum = 0;
int untilNum = 9;
for(int i=fromNum;i<=untilNum;i++)
{
TestSetup();
TestImpl(i);
TestCleanUpImpl();
}
}
If you have complicated setup and clean-up, you could implement a class that handles creation and disposal: do the setup in the constructor and the clean-up in the Dispose method.
I wouldn't use this as my first choice; I prefer to keep my tests as simple as possible. Even if my tests violate DRY, it makes them much easier to follow, which means less debugging, which is a good trade-off in my opinion.
public class TestImplObj : IDisposable
{
public TestImplObj()
{
//Setup test
}
public void TestImpl(int i)
{
//Do the actual test
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// Do the clean up here
}
}
}
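With that in place, the loop-based test might consume the class like this, so each iteration gets a fresh instance and deterministic disposal (names taken from the snippet above):
[TestMethod]
public void Test()
{
    for (int i = 0; i <= 9; i++)
    {
        // a new instance per iteration; Dispose runs at the end of each using block
        using (var testObj = new TestImplObj())
        {
            testObj.TestImpl(i);
        }
    }
}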

Related

How do I check if static variables are modified during a unit test

I'm working with a rather large test set (over 5000 separate test methods), and it appears that some tests sometimes modify static variables that they shouldn't. So I was wondering if there is a way to create a test that checks whether a variable has been modified during all the other tests.
I'm unable to modify the variables directly, but I can write unit tests and modify the settings pertaining to them.
I'm working in VS 2017 with C# 8.0 and MSTest v2.
Thanks.
You can mark methods in your test class to run before or after each test.
So you can do something like this:
[TestClass]
public class MyTestClass
{
private int originalValue; // adjust the type as needed
[TestInitialize]
public void PreTest()
{
// remember the value before the test
originalValue = MyContainer.StaticValue; // or whatever it is called in your system
}
[TestCleanup]
public void PostTest()
{
// check the current value against the remembered original one
if (MyContainer.StaticValue != originalValue)
{
throw new InvalidOperationException($"Value was modified from {originalValue} to {MyContainer.StaticValue}!");
}
}
[TestMethod]
public void ExecuteTheTest()
{
// ...
}
}
For other attributes, see a cheatsheet

Any way to instruct MSTest when to run a test method?

Is there a way or approach like a method decorator or attribute for a test method that can say for example:
"Run Method C Before running Method B"
So basically you are creating a dependency between C and B. I know tests are better off being atomic, and they should be, but sometimes it's better to keep your tests small and to the point. It makes sense not to run a 'RemoveItem' test method when the item it is looking for simply isn't there.
Most people would add the item beforehand and then test whether they can remove it, all in the same test. I don't like this approach and want to make my tests smaller, more to the point, and as atomic as possible.
Like you said, you don't want interdependencies between your tests. If you are not comfortable having an "Add" before the "Remove" in your remove test, thus testing the Add method in the wrong place, then I recommend using TestInitialize to set up some objects the tests can act on. I do, however, recommend the practice of actually running Add before you run Remove in the test of Remove.
[TestClass]
public class TestStacks
{
private Stack<string> emptyStack;
private Stack<string> singleItemStack;
[TestInitialize]
public void Setup()
{
singleItemStack = new Stack<string>();
singleItemStack.Push("Item");
emptyStack = new Stack<string>();
}
[TestMethod]
public void TestPush()
{
emptyStack.Push("Added");
Assert.AreEqual(1, emptyStack.Count);
}
[TestMethod]
public void TestRemove()
{
singleItemStack.Pop();
Assert.AreEqual(0, singleItemStack.Count);
}
[TestMethod]
[ExpectedException(typeof(InvalidOperationException))]
public void TestPopFromEmpty()
{
emptyStack.Pop();
}
}
If you need to have an item added before testing removal, then the best place to add the item is the arrange part of the removal test. This makes the context of the removal test clear.
But the DRY principle also works here: you can move the addition logic to a separate helper method and then call it twice, once when testing addition and once when arranging the context for removal:
[TestClass]
public class Tests
{
[TestMethod]
public void TestAddition()
{
AddItem();
// Assert addition
}
[TestMethod]
public void TestRemoval()
{
AddItem();
// Remove item
// Assert removal
}
private void AddItem()
{
// Add item
}
}

Unit testing: how to spot a logical mistake in the class under test

I'm reading the Art of Unit Testing book and trying to understand state-based testing. In one of the examples there is a calculator class like this:
public class Calculator
{
private int sum=0;
public void Add(int number)
{
sum+=number;
}
public int Sum()
{
int temp = sum;
sum = 0;
return temp;
}
}
and the book shows how we can test it:
[TestFixture]
public class CalculatorTests
{
private Calculator calc;
[SetUp]
public void Setup()
{
calc = new Calculator();
}
[Test]
public void Sum_NoAddCalls_DefaultsToZero()
{
int lastSum = calc.Sum();
Assert.AreEqual(0,lastSum);
}
[Test]
public void Add_CalledOnce_SavesNumberForSum()
{
calc.Add(1);
int lastSum = calc.Sum();
Assert.AreEqual(1,lastSum);
}
[Test]
public void Sum_AfterCall_ResetsToZero()
{
calc.Add(1);
calc.Sum();
int lastSum = calc.Sum();
Assert.AreEqual(0, lastSum);
}
}
So far, everything is great. But let's say I'm writing a calculator class much like that one, and I write the method like this:
public int Sum()
{
return sum;
}
and a test class like this:
[TestFixture]
public class CalculatorTests
{
private Calculator calc;
[SetUp]
public void Setup()
{
calc = new Calculator();
}
[Test]
public void Sum_NoAddCalls_DefaultsToZero()
{
int lastSum = calc.Sum();
Assert.AreEqual(0,lastSum);
}
[Test]
public void Add_CalledOnce_SavesNumberForSum()
{
calc.Add(1);
int lastSum = calc.Sum();
Assert.AreEqual(1,lastSum);
}
}
Let's say I didn't realize this while writing the code and the unit tests. How do I catch the following bug? The bug is that the sum is not reset to zero after Sum() is called, as in the following sequence:
add(1)
add(23)
sum() is 24 now
add(11)
add(12)
sum() => will be 47 but it has to be 23.
So how should I think in order to catch that logical mistake when I write the unit test? (If I write such a test, NUnit will tell me there is a mistake.) Then I'd come back, see the problem, and change the calculator class to:
public int Sum()
{
int temp = sum;
sum = 0;
return temp;
}
I hope you understand what I'm trying to say.
Thanks.
Basically, you can't find all the edge cases for sure. However, you can specify what you intend the code to do and write clean code. If a calculator is supposed to reset its sum after you ask for its sum, then that's part of the 'spec' there should be a test for; it's a 'requirement' invented by someone, so it should be easy to remember to write a test for it.
The harder thing is all the edge cases created by the way something is coded. I used to do coding interviews where I would write unit tests for candidates' code. I thought I had a good suite of tests to prove something worked, but I quickly found that people can code things in ways that introduce hard-to-test-for edge cases (like something failing the 9th time it does an operation that seems like it should work every single time). So mainly, if you follow the advice of TDD (write a test, write the code to make it pass, refactor to make the code clean), you won't go too far wrong.
And remember, this is not a magic bullet; there is no magic formula that allows you to write perfect code. You still need to think, think, think about what you are doing.
It sounds like you've basically already got a test case:
[Test]
public void CallingSumResets()
{
var calc = new Calculator();
calc.Add(10);
Assert.AreEqual(10, calc.Sum());
Assert.AreEqual(0, calc.Sum());
}
The test that it's actually performing addition would be done in other tests - this is just testing that after you call Sum the first time, it resets the internal state.
This test should fail:
[Test]
public void Sum_AfterCall_ResetsToZero()
{
calc.Add(1);
calc.Sum();
int lastSum = calc.Sum();
Assert.AreEqual(0, lastSum);
}
until you changed your code to reset the sum after Sum() is called. However, I would prefer to create a separate method Clear() rather than reset the sum in your getter.
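A sketch of what that might look like (not from the book; just the shape this suggestion implies):
public class Calculator
{
    private int sum = 0;

    public void Add(int number)
    {
        sum += number;
    }

    // Sum is now a pure query: it no longer mutates state
    public int Sum()
    {
        return sum;
    }

    // resetting is an explicit, separate operation
    public void Clear()
    {
        sum = 0;
    }
}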
TDD steps:
Think what you want the Calculator to do.
Write a test for it.
Write code to pass the test.
If I understand you correctly, the sample code below is your implementation, and it has a bug: it does not reset the sum value to zero as the correct implementation does, giving an incorrect result. Your question is how to write a unit test that catches this?
public int Sum()
{
return sum;
}
Assuming I've interpreted your question correctly, you should simply write a test that checks whether the value is zero when Sum() is invoked a second time:
add(11)
add(12)
sum() => ignore result
sum() => Should be zero
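Written as an NUnit test in the style of the fixtures above, that sequence might look like:
[Test]
public void Sum_CalledTwice_SecondCallReturnsZero()
{
    var calc = new Calculator();
    calc.Add(11);
    calc.Add(12);

    calc.Sum();                      // ignore the first result
    Assert.AreEqual(0, calc.Sum());  // should be zero if Sum() resets
}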

How to simulate throwing an exception in Unit tests?

How can I simulate an exception being thrown in C# unit tests?
I want to be able to have 100% coverage of my code, but I can't test the code paths for exceptions that may occur. For example, I cannot simulate a power failure.
For example:
public void MyMethod()
{
try
{
...
}
catch(OutOfMemoryException e)
{
...
}
catch(RandomErrorFromDatabaseLayer e)
{
...
}
}
I want to be able to simulate any kind of exception that is in this method and should be caught.
Are there any libraries that may help me in this matter?
Edit 1:
Any help in accomplishing what I asked with Moq?
You need to create a mock object that stands in for the real objects that can throw these exceptions. Then you can create tests that simply do something like this:
public void ExampleMethod()
{
throw new OutOfMemoryException();
}
If you are using a dependency injection framework it makes replacing the real code with the mock code much easier.
What you need is a stub: an object that simulates certain conditions for your code. For testing purposes, you usually replace the real object implementation with a stub (or another type of faked object). In your case, consider:
public class MyClass
{
private readonly IDataProvider dataProvider;
// the dependency is injected, so a test can pass in a stubbed/mocked implementation
public MyClass(IDataProvider dataProvider)
{
this.dataProvider = dataProvider;
}
public void MyMethod()
{
try
{
this.dataProvider.GetData();
}
catch (OutOfMemoryException)
{
}
}
}
Now, the class you are testing should be configurable at some level, so that you can easily replace the real DataProvider implementation with a stubbed/faked one when testing (like you said, you don't want to destroy your DB; nobody does!). This can be achieved, for example, by constructor injection (or, in fact, any other dependency injection technique).
Your test then is trivial (some made-up requirement to test when exception is thrown):
[Test]
public void MyMethod_DataProviderThrowsOutOfMemoryException_LogsError()
{
var dataProviderMock = new Mock<IDataProvider>();
dataProviderMock
.Setup(dp => dp.GetData())
.Throws<OutOfMemoryException>();
var myClass = new MyClass(dataProviderMock.Object);
myClass.MyMethod();
// assert whatever needs to be checked
}

How can I avoid multiple asserts in this unit test?

This is my first attempt to do unit tests, so please be patient with me.
I'm still trying to unit test a library that converts lists of POCOs to ADO.Recordsets.
Right now, I'm trying to write a test that creates a List<Poco>, converts it into a Recordset (using the method I want to test) and then checks if they contain the same information (like, if Poco.Foo == RS.Foo, and so on...).
This is the POCO:
public class TestPoco
{
public string StringValue { get; set; }
public int Int32Value { get; set; }
public bool BoolValue { get; set; }
}
...and this is the test so far (I'm using xUnit.net):
[Fact]
public void TheTest()
{
var input = new List<TestPoco>();
input.Add(new TestPoco { BoolValue = true, Int32Value = 1, StringValue = "foo" });
var actual = input.ToRecordset();
Assert.Equal(actual.BoolValue, true);
Assert.Equal(actual.Int32Value, 1);
Assert.Equal(actual.StringValue, "foo");
}
What I don't like about this are the three asserts at the end, one per property of the POCO.
I've read lots of times that multiple asserts in one test are evil (and I understand the reasons why, and I agree).
The problem is, how can I get rid of them?
I have Roy Osherove's excellent book "The Art of Unit Testing" right in front of me, and he has an example which covers exactly this (for those who have the book: chapter 7.2.6, page 202/203):
In his example, the method under test returns an AnalyzedOutput object with several properties, and he wants to assert all the properties to check if each one contains the expected value.
The solution in this case:
Create another AnalyzedOutput instance, fill it with the expected values, and assert that it's equal to the one returned by the method under test (and override Equals() to be able to do this).
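(For illustration, an Equals() override on my TestPoco would look roughly like this:)
public class TestPoco
{
    public string StringValue { get; set; }
    public int Int32Value { get; set; }
    public bool BoolValue { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as TestPoco;
        if (other == null) return false;
        return other.StringValue == StringValue
            && other.Int32Value == Int32Value
            && other.BoolValue == BoolValue;
    }

    public override int GetHashCode()
    {
        return (StringValue ?? string.Empty).GetHashCode() ^ Int32Value ^ BoolValue.GetHashCode();
    }
}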
But I think I can't do this in my case, because the method that I want to test returns an ADODB.Recordset.
And in order to create another Recordset with the expected values, I would first need to create it completely from scratch:
// this probably doesn't actually compile, the actual conversion method
// doesn't exist yet and this is just to show the idea
var expected = new ADODB.RecordsetClass();
expected.Fields.Append("BoolValue", ADODB.DataTypeEnum.adBoolean);
expected.Fields.Append("Int32Value", ADODB.DataTypeEnum.adInteger);
expected.Fields.Append("StringValue", ADODB.DataTypeEnum.adVarWChar);
expected.AddNew();
expected.BoolValue = true;
expected.Int32Value = 1;
expected.StringValue = "foo";
expected.Update();
I don't like this either, because this is basically a duplication of some of the code in the actual conversion method (the method under test), which is another thing to avoid in tests.
So...what can I do now?
Is this level of duplication still acceptable in this special situation, or is there a better way how to test this?
I'd argue that in the spirit of the thing, this is fine. The reason that multiple asserts are "evil", if I recall correctly, is that it implies that you are testing multiple things in one test. In this case, you are indeed doing that in that you are testing each field, presumably to make sure this works for several different types. Since that's all an object equality test would do anyway, I think you are in the clear.
If you really wanted to be militant about it, write one test per property (j/k!)
Multiple assertions per unit test are perfectly fine in my book, as long as the multiple assertions are all asserting the same test condition. In your case, they're testing that the conversion was successful, so the test passing is conditional on all of those assertions being true. As a result, it's perfectly fine!
I'd classify "one assertion per test" as a guideline, not a hard-and-fast rule. When you disregard it, consider why you're disregarding it.
That said, a way around it is to create a single test class that, on class setup, runs your test process. Then each test is just an assertion on a single property. For example:
public class ClassWithProperities
{
public string Foo { get; set; }
public int Bar { get; set; }
}
public static class Converter
{
public static ClassWithProperities Convert(string foo, int bar)
{
return new ClassWithProperities {Foo=foo, Bar=bar};
}
}
[TestClass]
public class PropertyTestsWhenFooIsTestAndBarIsOne
{
private static ClassWithProperities classWithProperties;
[ClassInitialize]
public static void ClassInit(TestContext testContext)
{
//Arrange
string foo = "test";
int bar = 1;
//Act
classWithProperties = Converter.Convert(foo, bar);
//Assert
}
[TestMethod]
public void AssertFooIsTest()
{
Assert.AreEqual("test", classWithProperties.Foo);
}
[TestMethod]
public void AssertBarIsOne()
{
Assert.AreEqual(1, classWithProperties.Bar);
}
}
[TestClass]
public class PropertyTestsWhenFooIsXyzAndBarIsTwoThousand
{
private static ClassWithProperities classWithProperties;
[ClassInitialize]
public static void ClassInit(TestContext testContext)
{
//Arrange
string foo = "Xyz";
int bar = 2000;
//Act
classWithProperties = Converter.Convert(foo, bar);
//Assert
}
[TestMethod]
public void AssertFooIsXyz()
{
Assert.AreEqual("Xyz", classWithProperties.Foo);
}
[TestMethod]
public void AssertBarIsTwoThousand()
{
Assert.AreEqual(2000, classWithProperties.Bar);
}
}
I agree with all the other comments that it is fine to do so, if you are logically testing one thing.
There is, however, a difference between having many assertions in a single unit test and having a separate unit test for each property. I call it "blocking assertions" (there is probably a better name out there). If you have many assertions in one test, then you are only going to learn about the first property that failed its assertion. If you have, say, 10 properties and 5 of them return incorrect results, then you will have to fix the first one, re-run the test, notice another one failed, fix that, and so on.
Depending on how you look at it, this could be quite frustrating. On the flip side, having 5 simple unit tests suddenly failing could also be off-putting, but it might give you a clearer picture of what caused them to fail all at once and possibly direct you more quickly to a known fix (perhaps).
I would say if you need to test multiple properties, keep the number down (possibly under 5) to avoid the blocking-assertion issue getting out of control. If there are a ton of properties to test, then perhaps it is a sign that your model represents too much, or perhaps you can look at grouping properties into multiple tests.
Those 3 asserts are valid. If you used a framework more like mspec, it would look like:
public class When_converting_a_TestPoco_to_Recordset
{
protected static List<TestPoco> inputs;
protected static Recordset actual;
Establish context = () => inputs = new List<TestPoco> { new TestPoco { /* set values */ } };
Because of = () => actual = inputs.ToRecordset ();
It should_have_copied_the_bool_value = () => actual.BoolValue.ShouldBeTrue ();
It should_have_copied_the_int_value = () => actual.Int32Value.ShouldBe (1);
It should_have_copied_the_String_value = () => actual.StringValue.ShouldBe ("foo");
}
I generally use mspec as a benchmark to see if my tests make sense. Your tests read just fine with mspec, and that gives me some semi-automated warm fuzzies that I'm testing the correct things.
For that matter, you've done a better job with multiple asserts. I hate seeing tests that look like:
Assert.That (actual.BoolValue == true && actual.Int32Value == 1 && actual.StringValue == "foo");
Because when that fails, the error message "expected True, got False" is completely worthless. Multiple asserts, and using the unit-testing framework as much as possible, will help you a great deal.
This might be something worth checking out: http://rauchy.net/oapt/
It's a tool that generates a new test case for every assert.
