NUnit base class - C#

I am approaching database testing with NUnit. Since these tests are time-consuming, I don't want to run them every time.
So I created a base class and had every other database testing class derive from it, thinking that if I decorated the base class with the [Ignore] attribute, the rest of the derived classes would be ignored as well, but that's not happening.
Is there any way to ignore a set of classes with minimal effort?

If you don't want to split integration and unit tests out into separate projects, you can also group tests into categories:
[Test, Category("Integration")]
Most test runners allow you to filter which categories to run, which gives you finer-grained control if you need it (e.g. 'quick', 'slow' and 'reaaallyy slow' categories).
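A minimal sketch of how the categorised fixtures might look (the class and test names here are made up). Applying [Category] at the fixture level tags every test in the class, and since the attribute is inheritable it should also cover tests in fixtures derived from a tagged base class (worth verifying against your NUnit version):

    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")]
    public abstract class DatabaseTestBase
    {
        // shared database setup/teardown would live here
    }

    [TestFixture]
    public class CustomerRepositoryTests : DatabaseTestBase
    {
        [Test]
        public void SavesCustomer()
        {
            Assert.Pass(); // stands in for a real database round trip
        }
    }

You can then exclude the slow group at run time, e.g. nunit3-console MyTests.dll --where "cat != Integration" with the NUnit 3 console runner, or dotnet test --filter TestCategory!=Integration when using the NUnit test adapter.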

A recommended approach is separating your unit tests that can run in isolation from your integration tests into different projects; then you can choose which project to execute when you run your tests. This makes it easier to run your faster tests more often, multiple times daily or even hourly (and hopefully without ever having to worry about such things as configuration), while letting your slower integration tests run on a different schedule.

Related

Functional tests fail when run together, but pass when run separately

I'm implementing a project using .NET Core and microservices, based on eShopOnContainers. In my functional tests I previously put all the tests in one class called IntegrationEventsScenarios.cs, and now I need to separate the tests by their related services. But when I run the Basket, Catalog, and Ordering test scenarios together, the tests do not pass.
I have tried, for example, putting the Basket ScenarioBase and TestStartup classes in the Ordering folder to remove any shared resources, but it didn't work.

Shared Context Between .NET Unit Test Classes

I understand that ideal unit tests should not share any context between them but I have a dilemma: I am trying to unit test against a piece of software that only has a single license that can be instantiated at a time.
.NET unit tests seem to run in parallel, so when I click "Run all tests" the classes all run simultaneously and most fail because they can't all hold a license.
So I see two questions that are mutually exclusive:
How do I share the context between C# unit testing classes?
OR How do I force .NET unit tests to NOT run in parallel?
Clarification: The licensed software is not what I'm trying to test, it's just the tool I need to DO the test
Normally I'd consider Singleton an anti-pattern, since it makes unit testing impossible.
But this is a good use case for a Singleton.
A real singleton, with a private constructor and a static constructor, will run only once and will be thread-safe.
This way you can keep your tests running in parallel.
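A minimal sketch of that idea, using Lazy&lt;T&gt;, which gives the same run-once, thread-safe guarantee as a static constructor (LicensedTool and LicenseHolder are hypothetical names standing in for the real tool):

    using System;

    // Hypothetical stand-in for the single-license software driven by the tests.
    public class LicensedTool
    {
        public LicensedTool() { /* acquires the one available license */ }
    }

    public sealed class LicenseHolder
    {
        // Lazy<T> runs the factory exactly once, even when parallel test
        // classes race to touch Instance for the first time.
        private static readonly Lazy<LicenseHolder> instance =
            new Lazy<LicenseHolder>(() => new LicenseHolder());

        public static LicenseHolder Instance => instance.Value;

        public LicensedTool Tool { get; }

        private LicenseHolder()
        {
            Tool = new LicensedTool();
        }
    }

Every test class then reaches the tool through LicenseHolder.Instance.Tool, so only one license is ever acquired no matter how the runner parallelises the classes.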
I'm not sure if this is what you're looking for, but would it work if you ran all the tests at the same time, with each of them running in a separate AppDomain?
For reference, I used cross-domain delegates, where you can pass in your actual test: https://msdn.microsoft.com/en-us/library/system.appdomain.docallback(v=vs.110).aspx
Let me know if that works!
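A sketch of what that might look like (classic .NET Framework only; creating AppDomains is not supported on .NET Core / .NET 5+, and the helper name here is made up):

    using System;

    public static class IsolatedRunner
    {
        // The delegate must point at a static method, since it is
        // marshalled across the domain boundary.
        public static void RunIsolated(CrossAppDomainDelegate testBody)
        {
            AppDomain domain = AppDomain.CreateDomain("IsolatedTestDomain");
            try
            {
                domain.DoCallBack(testBody); // executes inside the new domain
            }
            finally
            {
                AppDomain.Unload(domain); // tears down the domain's state
            }
        }
    }

Each call gets a fresh domain, so statics (and anything like a loaded license) don't leak between test bodies.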

Dependent Tests

Alright, so I'm using the Microsoft Test Framework for testing and I need to somehow build dependent tests. Why, you might ask? Because one of the tests ensures that I can load data from the database, and the others need to operate against that set; we don't want to repeat that round trip for every test, so that the automated tests stay efficient.
I have been searching and just can't seem to find a way to do a couple of things:
Decorate the test methods so that they are seen as dependent.
Persist the set between tests.
What have I tried?
Well, regarding the decoration for dependent tests, not much, because I can't seem to find an attribute built for that.
Regarding the persistence of the set, I've tried a private class variable and adding the set to the Properties of the test context. Both resulted in the loss of the set between tests.
I hope that you can help me out here!
Create your tests separately and then use an Ordered Test to run them in the order you want.
If any of the tests fails, the whole ordered test is aborted and considered failed.
I think what you need is an ordered test list. You can create one in your test project under 'New Item...'. An ordered test list is a list of tests in a specified order that are executed in the same context.
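On persisting the set between tests: MSTest creates a fresh instance of the test class for every test method, which is why a private instance field loses its value. A static field populated once in [ClassInitialize] survives across all the tests in the class. A minimal sketch, with a hypothetical loader standing in for the real database call:

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class CustomerSetTests
    {
        // Static, so the data survives across test methods; instance
        // fields reset because each test gets its own class instance.
        private static List<string> customers;

        [ClassInitialize]
        public static void LoadOnce(TestContext context)
        {
            customers = LoadCustomersFromDatabase(); // one round trip, shared below
        }

        [TestMethod]
        public void SetIsNotEmpty()
        {
            Assert.IsTrue(customers.Count > 0);
        }

        [TestMethod]
        public void SetContainsKnownCustomer()
        {
            CollectionAssert.Contains(customers, "Alice");
        }

        // Hypothetical loader standing in for the real database query.
        private static List<string> LoadCustomersFromDatabase()
        {
            return new List<string> { "Alice", "Bob" };
        }
    }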
By the way: unit tests should test only the smallest unit in an application, not a huge set of operations, because if one unit is not working correctly you can easily find which one.

How to split unit tests into groups

I am using Visual Studio 2008 and I would like to be able to split up my unit tests into two groups:
Quick tests
Longer tests (i.e. interactions with the database)
I can only see options to run all tests or a single test, and to run all of the tests in a unit test class.
Is there any way I can split these up or specify which tests to run when I want to run a quick test?
Thanks
If you're using NUnit, you could use the CategoryAttribute.
The equivalent in MSTest is the TestCategory attribute - see here for a description of how to use it.
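A quick sketch of how that looks in MSTest (the class and test names are made up):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class OrderServiceTests
    {
        [TestMethod, TestCategory("Quick")]
        public void ValidatesInput()
        {
            Assert.IsTrue(true); // placeholder for a fast, isolated check
        }

        [TestMethod, TestCategory("Slow")]
        public void RoundTripsThroughDatabase()
        {
            Assert.IsTrue(true); // placeholder for a real database interaction
        }
    }

From the command line you can then run a single group, e.g. vstest.console.exe MyTests.dll /TestCaseFilter:"TestCategory=Quick", or mstest /category:Quick with the older runner.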
I would distinguish your unit test groups as follows:
Unit Tests - Testing single methods / classes, with stubbed dependencies. Should be very quick to execute, as there are only internal dependencies.
Integration Tests - Testing two or more components together, such as your data access classes against an actual backing database. These are generally lengthy, as you may be dealing with an external dependency such as a DB or web service. However, these could still be quick tests depending on which components you are integrating. The key here is that the scope of the test is different from Unit Tests.
I would create separate test libraries, i.e. MyProj.UnitTests.dll and MyProj.IntegrationTests.dll. This way, your Unit Tests library will have fewer dependencies than your Integration Tests library. It will then be easy to specify which test group you want to run.
You can set up a continuous integration server, if you are using something like that, to run the tests at different times, knowing that the first group is quicker than the second. For example, Unit Tests could run immediately after code is checked in to your repository, and Integration Tests could run overnight. It's easy to set something like this up using TeamCity.
There is the Test List Editor. I'm not at my Visual Studio computer now so I'll just point to this answer.

Testing Classes - When to refactor?

I'm writing a series of automated tests in C# using NUnit and Selenium.
Edit: I am testing an entire website. To begin, I wrote three classes for the three types of members that use the website; these classes contain methods which use Selenium to perform the various actions those members take. These classes are then created and their methods called by my test classes with the appropriate inputs.
My question is:
Does it matter how large my test class becomes? (i.e. thousands of tests?)
When is it time to refactor my functionality classes? (25 or 50 methods, 1000 lines of code, etc)
I've been trying to read all I can about test design so if you have any good resources I would appreciate links.
Does it matter how large my test class becomes? (i.e. thousands of tests?)
Yes, it does. Tests need to be maintained over the long term, and a huge test class is difficult to understand and maintain.
When is it time to refactor my functionality classes? (25 or 50 methods, 1000 lines of code, etc)
When you start to feel it is awkward to find a specific test case, or to browse through the tests related to a specific scenario. I don't think there is a hard limit here, just as there is no hard limit on the size of production classes or the number of methods. I personally set the limits higher for test code than for production code, because test code tends to be simpler, so the threshold where it starts to become difficult to understand is higher. But in general, a 1000-line test class with 50 test methods starts to feel too big to me.
I just recently had to work with such a test class, and I ended up partitioning it, so now I have several test classes, each testing one particular method / use case of a specific class*. Some of the old tests I managed to convert into parameterized tests, and all new tests are written as parameterized tests. I found that parameterized tests make it much easier to see the big picture and keep all test cases in mind at once. I did this using JUnit on a Java project, but I see NUnit 2.5 now offers parameterized tests too - you should check it out.
*You may rightly ask: shouldn't the class under test be refactored if we need so many test cases to cover it? Yes, it should, eventually. It is the largest class in our legacy app, with way too much stuff in it. But first we need to have the test cases in place :-) Btw, this may apply to your class too - if you need that many test cases to cover it, the class under test may just be trying to do too much, and you would be better off extracting some of its functionality into a separate class with its own unit tests.
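As an illustration of the parameterized style mentioned above, here is a minimal NUnit sketch (PriceCalculator is a hypothetical class under test):

    using NUnit.Framework;

    // Hypothetical class under test.
    public class PriceCalculator
    {
        public int Apply(int price, int percentOff)
        {
            return price * (100 - percentOff) / 100;
        }
    }

    [TestFixture]
    public class PriceCalculatorTests
    {
        // One parameterized test covers several cases and keeps the
        // whole picture visible in one place.
        [TestCase(100, 0, 100)]
        [TestCase(100, 10, 90)]
        [TestCase(0, 50, 0)]
        public void AppliesDiscount(int price, int percentOff, int expected)
        {
            var calc = new PriceCalculator();
            Assert.AreEqual(expected, calc.Apply(price, percentOff));
        }
    }

Each [TestCase] shows up as its own test in the runner, so a failing case points straight at the offending inputs.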
