Unit tests: Create Mock as property in the UnitTest class? - C#

When writing unit tests, I wonder if it is best practice to create a property for the mock like in this example (I see this all over tutorials on the internet):
public class CartControllerTests
{
    private CartController controller;
    private Mock<ICartService> cartServiceMock;

    [SetUp]
    public void Setup()
    {
        cartServiceMock = new Mock<ICartService>();
    }
}
(Taken and adjusted from https://softchris.github.io/pages/dotnet-moq.html#full-code)
Because when I write several unit tests and set up the mock for a specific behaviour per test, I think it is possible that the setup from one test could carry over and interfere with the setup in the next test.
Wouldn't it be better to create a separate mock for every unit test, and set it up only for that specific test?
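For example, something like this, where each test owns its mock (CartController taking ICartService in its constructor, and the GetItemCount member, are just assumptions for the sake of the example):

using Moq;
using NUnit.Framework;

[TestFixture]
public class CartControllerTests
{
    [Test]
    public void GetItemCount_ReturnsCountFromService()
    {
        // Fresh mock per test, so no setup can leak in from another test.
        var cartServiceMock = new Mock<ICartService>();
        cartServiceMock.Setup(s => s.GetItemCount()).Returns(3);

        var controller = new CartController(cartServiceMock.Object);

        Assert.AreEqual(3, controller.GetItemCount());
    }
}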

Related

NUnit not calling setup once per test fixture

I'm new to NUnit and am trying to run two test fixtures, A and B, each with its own Setup method. However, when I click "Run All" in the Test Explorer in Visual Studio, the setup for Fixture A is called (it was executed first) but the Setup for Fixture B is ignored. I get the same behavior when running all tests via the command line. Below is my code:
Fixture A
[TestFixture]
public class A
{
    [SetUp]
    public void SetupTest()
    {
        // ...Setup for Fixture A
    }

    [Test, Order(1)]
    public void TestForFixtureA()
    {
        // ...perform test
    }
}
Fixture B
[TestFixture]
public class B
{
    [SetUp]
    public void SetupTest()
    {
        // ...Setup for Fixture B
    }

    [Test]
    public void TestForFixtureB()
    {
        // ...perform test
    }
}
What is the correct way to get Setup methods to execute per Fixture?
You are using the incorrect attribute for setup at the test fixture level. The attribute you should be using is [SetUpFixture]. Information about this can be found in the NUnit documentation.
Here is a list of all the setup attributes and their usages, taken from the documentation:
SetUpAttribute is now used exclusively for per-test setup.
TearDownAttribute is now used exclusively for per-test teardown.
OneTimeSetUpAttribute is used for one-time setup per test-run. If you run n tests, this event will only occur once.
OneTimeTearDownAttribute is used for one-time teardown per test-run. If you run n tests, this event will only occur once.
SetUpFixtureAttribute continues to be used as before, but with changed method attributes.
This doesn't seem to explain the bizarre behaviour you are seeing, as setup should be run per-test, but using the correct attributes couldn't hurt.
If you intend your setup to be run once per fixture, use [OneTimeSetUp]. But if you intend it to run once per test within the fixture, then [SetUp] is correct. We can't tell what you intend from the code.
Whichever one you use, the setups should all run. The only situation in which [OneTimeSetUp] will run but [SetUp] will not is when no individual tests are found within the fixture. Are all the tests being recognized?
I suggest you verify very clearly that the setup is not being run. The easiest way is to temporarily create some output from the setup method.
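For example, a minimal sketch (TestContext.Progress is NUnit 3; on older versions Console.WriteLine works too):

[SetUp]
public void SetupTest()
{
    // Temporary diagnostic: proves the setup actually ran for each test.
    TestContext.Progress.WriteLine("SetUp ran for " + TestContext.CurrentContext.Test.Name);
    // ...Setup for Fixture A
}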

Clean Up after Canceling tests

I'm currently running tests through Visual Studio. Before all the tests are run, I automatically create a set number of users with known credentials, and at the end of the run I delete those users. However, sometimes I need to cancel my tests midway. In these cases the test never gets the chance to clean up, which means there is leftover fake user info from the test run that may cause the next test run to crash (when it attempts to add user info into the DB). Is there any way to force Visual Studio/MSTest to run a cleanup method even if the test is canceled?
I know one option is to have the test check whether the user info already exists and, if it does, remove it before creating the new users. But this still wouldn't solve the issue of the canceled test run leaving unwanted test data.
Update:
Sorry for the miscommunication, but cleaning up the data at the start of the test is not an option. I'm giving a very simplified view of the issue; put simply, I have no easy way of making sure that no test data exists at the start of the test. All cleanup must occur at the end of the test.
That is impossible. You'd better find an alternative solution, such as using a separate database for testing and cleaning all data before each test run, using a fixed set of test users, or marking test data with some flag. Check the "Isolating database data in integration tests" article by Jimmy Bogard.
There is no built-in way to change MSTest's default behavior. In theory you can write an MSTest extension that utilizes the TestExecution.OnTestStopping event, but that is not an easy process and it requires a registry change. Moreover, a lot of people complain that it does not work.
There is also MSTest V2, a new version of MSTest with new extensibility points. But it looks like you can't alter the cancel behavior with these points, only write attribute decorators. See Extending MSTest V2.
You can't use the AppDomain.CurrentDomain.ProcessExit and Process.GetCurrentProcess().Exited events either, because canceling seems to kill the test run process.
NUnit also doesn't support this at the moment. See the related NUnit test adapter issue, "Run TearDowns on VS Cancel Test Run".
Instead of calling the cleanup function at the end of the test, I call mine at the beginning of each test in order to address this exact problem.
Perform the cleanup before creating the data as well; this ensures that you have no leftover data whatever happens. Of course, this is only possible if you can identify any leftover data before running the setup, as in the sketch below.
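For example, a sketch of a self-healing setup, assuming the fake users share a recognizable prefix (UserRepository and its methods are hypothetical placeholders for whatever data access you use):

[SetUp]
public void CreateTestUsers()
{
    var repository = new UserRepository("your-connection-string");

    // First remove any leftovers from a previous canceled run...
    repository.DeleteUsersWithPrefix("testuser_");

    // ...then create the known users for this run.
    repository.CreateUser("testuser_alice", "password1");
    repository.CreateUser("testuser_bob", "password2");
}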
The idea is that a transaction is initialized before the test begins. For the data to be saved in the database, the transaction would have to be committed, but it never is. This works whether the test is stopped midway, completes successfully, or fails.
In our integration tests we use something like this (with NUnit); it is real production code:
using System.Transactions;
using NUnit.Framework;
using NUnit.Framework.Interfaces;

public class RollbackAttribute : TestAttribute, ITestAction
{
    private TransactionScope _transaction;

    public void BeforeTest(ITest test)
    {
        _transaction = new TransactionScope();
    }

    public void AfterTest(ITest test)
    {
        // Disposing without calling Complete() rolls the transaction back.
        _transaction.Dispose();
    }

    public ActionTargets Targets => ActionTargets.Test;
}
[TestFixture]
public class SomeTestClass
{
    [Rollback] // No need for [Test]: RollbackAttribute inherits from TestAttribute.
    public void SomeTestMethod()
    {
    }
}
With MSTest you can do something similar, but in that case you should inherit from a base class. I hope it works. For example:
using System.Transactions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class RollbackTestBase
{
    private TransactionScope _transaction;

    [TestInitialize]
    public virtual void Setup()
    {
        _transaction = new TransactionScope();
    }

    [TestCleanup]
    public virtual void TearDown()
    {
        _transaction.Dispose();
    }
}
[TestClass]
public class IntegrationTest : RollbackTestBase
{
    [TestMethod]
    public void TestDataBase()
    {
        Assert.IsTrue(true);
    }

    [TestInitialize]
    public virtual void Init()
    {
    }

    [TestCleanup]
    public virtual void CleanUp()
    {
    }
}
There are two cases to consider when allocating resources in ATPs (resources might be creating users or a connection to a database):
Creation and deletion of resources around each test.
Creation and deletion of resources around a set of tests.
Creation and deletion of resources around each test:
If we want to create an instance of a particular object before a test executes and clean up the memory allocated to that object after the test executes, we use NUnit's SetUp and TearDown attributes. In your case the object is the set of users being created.
[SetUp]: A function decorated with the SetUp attribute contains code that executes before each test.
[TearDown]: A function decorated with the TearDown attribute contains code that executes after each test.
Implementation:
[TestFixture]
public class UnitTest1
{
    [SetUp]
    public void SetUp()
    {
        // Create users with proper credentials
    }

    [Test]
    public void TestMethod1()
    {
        // Write your ATP
    }

    [TearDown]
    public void TearDown()
    {
        // Clean up
    }
}
Creation and deletion of resources around a set of tests:
If we want to create an instance of an object for a set of tests and clean up the memory after all of those tests have executed, we use [TestFixtureSetUp] and [TestFixtureTearDown] to initialize the object and to clean up, respectively. Again, in your case the object can be the set of users being created.
[TestFixtureSetUp]: A function decorated with TestFixtureSetUp executes once, before the group of tests.
[TestFixtureTearDown]: A function decorated with TestFixtureTearDown executes once, after the group of tests.
Implementation:
[TestFixture]
public class Tests
{
    [TestFixtureSetUp]
    public void Setup()
    {
        // Create users with credentials
    }

    [Test]
    public void _Test1()
    {
        // Test_1
    }

    [Test]
    public void _Test2()
    {
        // Test2
    }

    [TestFixtureTearDown]
    public void CleanUp()
    {
        // Cleanup; here you need to add code to delete all users
    }
}
Note: if you are trying to create and delete users for a particular ATP, go with SetUp and TearDown. If you are doing the same for a bunch of ATPs, I recommend TestFixtureSetUp and TestFixtureTearDown.
"Whether your test passes or fails, the SetUp and TearDown functions will execute."
References:
Shuvra's answer (above).
NUnit: SetUp, TearDown, SetUpFixture, TearDownFixture
I think you should open a transaction before the test, create the data, and finish the test, but do not commit the transaction. That will ensure that the test can't affect your DB at all.
Update:
The easier approach is to use a Docker container.
You can run a container from your image and remove that container after the test is done. This should definitely reduce the complexity of your test.
If you are running NUnit tests from Visual Studio, you can use the TearDown attribute. It should run after the test, even if the test is canceled. You can write a function to clean up your data.
Please read the reference documentation here: http://nunit.org/docs/2.2/teardown.html
To clarify the NUnit standard a bit more, follow the structure in this test class:
[TestFixture]
public class _TestClass
{
    [TestFixtureSetUp]
    public void Setup()
    {
        // Cleanup could also go here, before the tests start, but that is not recommended.
    }

    [Test]
    public void _Test1()
    {
    }

    [Test]
    public void _Test2()
    {
    }

    [TestFixtureTearDown]
    public void CleanUp()
    {
        // I recommend cleaning up here, after all the tests complete.
    }
}
Reference: http://nunit.org/docs/2.5/fixtureTeardown.html
A better solution to the problem is to use what is called "database mocking". In this case you would have your tests run against a different database (or a fake, virtual one).
This article explains how to implement it in C#:
https://msdn.microsoft.com/en-us/library/ff650441.aspx
You should begin a transaction and not commit your records to the DB. That way, all your changes will be automatically rolled back when the session is over.
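For example, a minimal sketch using TransactionScope (the same mechanism the RollbackAttribute shown earlier relies on):

using System.Transactions;

[Test]
public void CreatesUsersInsideATransaction()
{
    using (var transaction = new TransactionScope())
    {
        // Create users and run assertions here.

        // No transaction.Complete() call: disposing the scope rolls everything back.
    }
}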

writing a test using moq unit testing framework

I'm a bit confused about writing unit tests in C#. I've written the following class while learning Moq. I've noticed that [SetUp] is actually an attribute from NUnit. How can I write a test class that only uses one framework or the other, if that's even possible? And if I want to use Moq, what attributes am I missing to successfully run this test? I know there's [TestFixture], [TestMethod], etc., but which one do I use for Moq?
Thanks,
James
public class CatalogCommandTests
{
    private Mock<IReferenceData> _mockReferenceData;

    [SetUp]
    public void TestInitialize()
    {
        _mockReferenceData = new Mock<IReferenceData>();
    }

    public void TestMyGetMethodReturnsListOfStrings()
    {
        var contractsList = new List<Contract>();
        _mockReferenceData.Setup(x => x.MyGetMethod()).Returns(contractsList);
    }
}
Your mock looks good. To get the mocked IReferenceData instance you have to call _mockReferenceData.Object.
A method decorated with the SetUp Attribute will be executed before each test method is called.
(See: Setup).
If you want TestMyGetMethodReturnsListOfStrings to get called, you have to decorate the method with the Test attribute
(See: Test).
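Putting both points together, a minimal sketch of the corrected test class might look like this (IReferenceData, Contract, and MyGetMethod are taken from your question; the assertion is just one way to check the stubbed return value):

using System.Collections.Generic;
using Moq;
using NUnit.Framework;

[TestFixture]
public class CatalogCommandTests
{
    private Mock<IReferenceData> _mockReferenceData;

    [SetUp]
    public void TestInitialize()
    {
        _mockReferenceData = new Mock<IReferenceData>();
    }

    [Test]
    public void TestMyGetMethodReturnsListOfStrings()
    {
        var contractsList = new List<Contract>();
        _mockReferenceData.Setup(x => x.MyGetMethod()).Returns(contractsList);

        // .Object is the actual IReferenceData instance you hand to the code under test.
        IReferenceData referenceData = _mockReferenceData.Object;

        Assert.AreSame(contractsList, referenceData.MyGetMethod());
    }
}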

How to create a restful web service with TDD approach?

I've been given the task of creating a RESTful web service with JSON formatting, using WCF, following a TDD approach. It should store the Product as a text file on disk and expose the methods below:
CreateProduct(Product product)
GetAProduct(int productId)
URI Templates:
POST to /MyService/Product
GET to /MyService/Product/{productId}
Creating the service and its web methods is the easy part, but how would you approach this task with TDD? You should create a test before writing the code of the system under test.
The rules of unit tests say they should also be independent and repeatable.
I have a number of questions and issues, as below:
1) Should I write my unit tests against the actual service implementation by adding a reference to it, or against the URLs of the service (in which case I'd have to host and run the service)? Or both?
2) I was thinking one approach could be to create just one test method in which I create a product, call the CreateProduct() method, then call the GetAProduct() method and assert that the product which was sent is the one I received. In the TearDown() event I would just remove the product that was created.
But the issues I have with the above are that:
It tests more than one feature, so it's not really a unit test.
It doesn't check whether the data was stored in the file correctly.
Is it TDD?
If I create a separate unit test for each web method, then for example for calling the GetAProduct() web method I'd have to have some test data stored physically on the server, since it can't rely on the CreateProduct() unit tests; they should be able to run independently.
Please advise.
Thanks,
I'd suggest not worrying about the web service end points and focusing on the behavior of the system. For the sake of this discussion I'll drop all technical jargon and talk about what I see as the core business problem you're trying to solve: creating a product catalog.
To do so, start by thinking through what a product catalog does, not the technical details about how it does it. Use that as the starting point for your tests.
public class ProductCatalogTest
{
    [Test]
    public void allowsNewProductsToBeAdded() {}

    [Test]
    public void allowsUpdatesToExistingProducts() {}

    [Test]
    public void allowsFindingSpecificProductsUsingSku() {}
}
I won't go into detail about how to implement the tests and production code here, but this is a starting point. Once you've got the ProductCatalog production class worked out, you can turn your attention to the technical details like making a web service and marshaling your JSON.
I'm not a .NET guy, so this will be largely pseudocode, but it probably winds up looking something like this.
public class ProductCatalogServiceTest
{
    [Test]
    public void acceptsSkuAsParameterOnGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand-rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog);

        catalogService.find("some-sku-from-url");

        mockCatalog.assertFindWasCalledWith("some-sku-from-url");
    }

    [Test]
    public void returnsJsonFromGetRequest()
    {
        var mockCatalog = new MockProductCatalog(); // Hand-rolled mock here.
        mockCatalog.findShouldReturn(new Product("some-sku-from-url"));
        var mockResponse = new MockHttpResponse(); // Hand-rolled mock here.
        var catalogService = new ProductCatalogService(mockCatalog, mockResponse);

        catalogService.find("some-sku-from-url");

        mockResponse.assertWriteWasCalledWith("{ 'sku': 'some-sku-from-url' }");
    }
}
You've now tested end to end, and test drove the whole thing. I personally would test drive the business logic contained in ProductCatalog and likely skip testing the marshaling as it's likely to all be done by frameworks anyway and it takes little code to tie the controllers into the product catalog. Your mileage may vary.
Finally, while test driving the catalog, I would expect the code to be split into multiple classes and mocking comes into play there so they would be unit tested, not a large integration test. Again, that's a topic for another day.
Hope that helps!
Brandon
Well, to answer your question: what I would do is write the test around the code that calls the REST service, and use something like Rhino Mocks to arrange (i.e. set up an expectation for the call), act (actually run the code that calls the unit under test), and assert that you get back what you expect. You could mock out the expected results of the REST call. An actual front-to-back test of the REST service would be an integration test, not a unit test.
So, to be clearer, the unit test you need to write is a test around whatever in the business logic actually calls the REST web service...
Say this is your proposed implementation (let's pretend it hasn't even been written yet):
public class SomeClass
{
    private IWebServiceProxy proxy;

    public SomeClass(IWebServiceProxy proxy)
    {
        this.proxy = proxy;
    }

    public void PostTheProduct()
    {
        proxy.Post("/MyService/Product");
    }

    public void RestGetCall()
    {
        proxy.Get("/MyService/Product/{productId}");
    }
}
This is one of the tests you might consider writing.
[TestFixture]
public class TestingOurCalls
{
    [Test]
    public void TestTheProductCall()
    {
        var webServiceProxy = MockRepository.GenerateMock<IWebServiceProxy>();
        SomeClass someClass = new SomeClass(webServiceProxy);

        webServiceProxy.Expect(p => p.Post("/MyService/Product"));

        someClass.PostTheProduct();

        webServiceProxy.VerifyAllExpectations();
    }
}

Multiple [SetupTest] for different configs

Is it possible to have multiple [SetupTest] methods in a fixture?
I am using Selenium and NUnit and would like to be able to specify the browser on which the user wants to test.
I have a simple user GUI for selecting tests to run, but I am aware that in the future we want to hook this up to CruiseControl to run the tests automatically. Ideally I want tests that can be run from both our GUI and the NUnit GUI.
Is it possible to have multiple [SetupTest] methods in a fixture? No.
It is possible to define all your tests in a base class, have multiple fixtures inherit the tests, and then select an environment-dependent fixture type at runtime.
Here's the stock example that I have for [TestFixtureSetUp]. The same principle works for all the setup attributes. Notice that I'm only putting [TestFixture] on the child classes; since the base TestClass doesn't have complete setup code, you don't want to run its tests directly.
public class TestClass
{
    public virtual void TestFixtureSetUp()
    {
        // environment independent code...
    }

    [Test]
    public void Test1() { Console.WriteLine("Test1 pass."); }

    // More environment independent tests...
}

[TestFixture]
public class BrowserFixture : TestClass
{
    [TestFixtureSetUp]
    public override void TestFixtureSetUp()
    {
        base.TestFixtureSetUp();
        // environment dependent code...
    }
}

[TestFixture]
public class GUIFixture : TestClass
{
    [TestFixtureSetUp]
    public override void TestFixtureSetUp()
    {
        base.TestFixtureSetUp();
        // environment dependent code...
    }
}
I suspect you can use the parameterized test fixtures introduced in NUnit 2.5 to do what you want, but I'm not totally clear on what you want to do here. However, you could define the fixture so that it takes a browser name in its constructor, then use parameterized TestFixture attributes like this:

[TestFixture("Firefox")]
[TestFixture("Chrome")]
public class ParameterizedTestFixture
{
    // Constructor
    public ParameterizedTestFixture(string browser)
    {
        // set fixture variables relating to browser treatment
    }

    // rest of class
}
See NUnit Documentation for more info.
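For completeness, a minimal sketch of how the parameterized fixture might be used; NUnit constructs the fixture once per [TestFixture] attribute and passes the argument to the constructor (the actual browser launch is left as a placeholder):

[TestFixture("Firefox")]
[TestFixture("Chrome")]
public class ParameterizedTestFixture
{
    private readonly string _browser;

    public ParameterizedTestFixture(string browser)
    {
        _browser = browser;
    }

    [Test]
    public void PageLoads()
    {
        // Launch Selenium against _browser here; shown only as a placeholder.
        Assert.IsNotNull(_browser);
    }
}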
The SetUp attribute identifies a method that is run before each test. It only makes sense to have one SetUp per test fixture; think of it as a 'reset' or 'preparation' step that runs before each test.
