I have an NUnit test project that is testing a Console App project. The Console app project uses the app.config file heavily. When running tests from my NUnit test project, the code being tested uses the config values in my Tests.dll.config file. This file is located in my Test project's root directory and is a copy of the app.config file from the app being tested.
However, in some tests I need to change the value of some of the config settings. I have been using this in my NUnit test to do it:
ConfigurationManager.AppSettings.Set("SettingKey", "SettingValue");
I don't want runtime config changes made in one test to interfere with, or be seen by, any other test. Is this the correct way to do it?
UPDATE
I should also mention that my tests run in parallel. I think this is because I am using ReSharper.
So if I change the config in one test I think it may change the config in another test, which I don't want.
Is it possible to wrap the configuration-reading code in an interface?
For example:
public interface IAppSettings
{
    string Get(string settingKey);
}
So you can easily abstract away the appSettings access.
Then in your NUnit project you can implement IAppSettings with a simple class and configure your app to use it. In that case you don't need to read a real app.config file and can easily change configuration per test. It will also speed up your tests.
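For example, a minimal in-memory implementation for tests might look like this (a sketch; the class name and dictionary backing are illustrative):

using System.Collections.Generic;

// Test double backed by a plain dictionary; each test can create its own
// instance, so parallel tests never see each other's values.
public class InMemoryAppSettings : IAppSettings
{
    private readonly Dictionary<string, string> _settings;

    public InMemoryAppSettings(Dictionary<string, string> settings)
    {
        _settings = settings;
    }

    public string Get(string settingKey)
    {
        string value;
        return _settings.TryGetValue(settingKey, out value) ? value : null;
    }
}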
It seems you are interested in an integration test and not a unit test, the reason being that your tests need to access the configuration file and modify some values so they can run correctly.
You said you don't want your runtime config changes to interfere with other tests.
The best way to handle this is to use NUnit's built-in test initialization and tear-down approach.
For example, you can use:
A. the NUnit [SetUp] attribute to make runtime changes to your config
B. the NUnit [TearDown] attribute to roll back the changes you made
See more info
http://www.nunit.org/index.php?p=setup&r=2.2.10
Note that each and every test in your class will run the above setup/teardown sequence.
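A minimal sketch of that save-and-restore pattern, assuming the setting is mutated through ConfigurationManager.AppSettings as in your question:

using System.Configuration;
using NUnit.Framework;

[TestFixture]
public class ConfigDependentTests
{
    private string _originalValue;

    [SetUp]
    public void OverrideSetting()
    {
        // remember the original in-memory value, then override it
        _originalValue = ConfigurationManager.AppSettings["SettingKey"];
        ConfigurationManager.AppSettings.Set("SettingKey", "SettingValue");
    }

    [TearDown]
    public void RestoreSetting()
    {
        // put the in-memory value back so later tests see the original
        ConfigurationManager.AppSettings.Set("SettingKey", _originalValue);
    }
}

Note that this only isolates tests that run sequentially; it cannot protect tests running in parallel, since they share the same in-memory collection.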
So if you need a more customized approach, you can simply create your own setup method and call it from the tests that need it; after the assert, call a tear-down method to clean up/roll back what your setup method did. This way it only affects the tests where you need to change config values.
For example (method names are placeholders):
[Test]
public void MyTestMethod()
{
    MyOwnTestInit();   // make the config changes this test needs
    CallSut();         // exercise the system under test
    // perform any asserts
    MyOwnTearDown();   // roll back what MyOwnTestInit did
}
Related
TL;DR: As per the title: how can I run the same tests but with different setups?
I'm working in C#, Visual Studio, deploying via Azure DevOps (nee VSTS) to a WebApp in Azure.
The software under development configures resources in Azure, and identities in Azure Active Directory.
This means I have tests like:
[Fact]
public void ShouldCreateAzureDevOpsProject()
{
    _azureDevOpsHelper.GetAzureDevOpsProject(_projectName).Should().NotBe(null);
}

[Fact]
public void ShouldPutDefaultFileInRepo()
{
    _azureDevOpsHelper.GetDefaultFileFromRepo(_projectName).Should().NotBe(null);
}

[Fact]
public void ShouldEnableAllMicrosoftResourceProviders()
{
    _azureSubscriptionHelper.GetMicrosoftResourceProviders().Select(x => x.RegistrationState).Should().NotContain("NotRegistered");
}
I want to run these tests against the code as I write it. The code runs on my laptop, so the setup (which I currently have in an xUnit Fixture) is:
new EngineOrchestrator.EngineOrchestrator().RequestInstance(userSuppliedConfiguration);
But those tests are equally applicable to being run in our deployment pipeline, to check for regression after deploying to our test environment.
For those purposes, the set up would involve creating a HTTP client, and hitting the endpoint of the application.
The point is that the tests are identical, regardless of the setup. The values-to-test-for are sourced from a json configuration file in both the 'local' and 'pipeline' cases; isolation is achieved by subbing in a different configuration file for the tests during deployment.
Put another way; I'm trying to work out how I can encapsulate the setup, so that two different setups can share the same tests. That's the reverse of what fixtures etc do, where multiple tests can share the same setup.
I have tried:
Switching the "setup" depending on what machine is running the test
if (Environment.MachineName.StartsWith("Plavixo"))
{
    new EngineOrchestrator.EngineOrchestrator().RequestInstance(userSuppliedConfiguration);
}
else
{
    HttpEngineHelper.RunOrchestrator(userSuppliedConfiguration, authenticationDetails);
}
This is my current solution, but it feels brittle, and it makes the test artifact huge because it necessarily includes all the source needed to new up the Engine, even when the tests are going to run on the build machine.
Making the tests abstract, and inheriting into concrete classes that have their specific setup in their constructors.
public class LocalBootstrap : BootstrapTests.BootstrapTests
{
    public LocalBootstrap() : base()
    {
        // do specific setup here
    }
}

public abstract class BootstrapTests
{
    [Fact]
    public void ShouldCreateAzureDevOpsProject()
    {
        // ...
    }
}
This sort of worked, but the setup ran before each and every test, which makes sense: "xUnit.net creates a new instance of the test class for every test that is run, so any code which is placed into the constructor of the test class will be run for every single test."
Making the Fixture abstract
A fixture runs once and is shared between tests. I tried making the fixture abstract, and having a concrete class for each of my set ups.
xUnit throws System.AggregateException : Class fixture type may only define a single public constructor. This is referenced by a GitHub issue that has been closed as "By Design".
Running my local tests on a localhost
This is my next option to investigate. Is this a good idea?
Are there any other things I should try?
I ended up working out a solution that works for us.
The crux of the solution was making the tests abstract, and inheriting from that abstract class with a couple of concrete classes that captured the setup I wanted.
This is a simplified diagram of what we now have.
We're still using fixtures because we want the Arrange code and the Act code to run only once.
This arrangement works really well for us because we can run the same tests with multiple set ups (we have Bootstrap and Full), and with multiple invocation styles (run remotely vs run locally). We intend to add more set ups for Greenfield and Brownfield (which relate to whether our code has run before or not).
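In code, a minimal sketch of the shape looks like this (fixture names and bodies are illustrative; the real Arrange and Act code lives in the fixture constructors):

using Xunit;

// Each fixture performs one setup, and xUnit runs it once per concrete test class.
public class LocalEngineFixture
{
    public LocalEngineFixture()
    {
        // e.g. new EngineOrchestrator.EngineOrchestrator().RequestInstance(...)
    }
}

public class RemoteEngineFixture
{
    public RemoteEngineFixture()
    {
        // e.g. create an HTTP client and hit the deployed endpoint
    }
}

// The tests are written once, in an abstract base class.
public abstract class BootstrapTests
{
    [Fact]
    public void ShouldCreateAzureDevOpsProject()
    {
        // assert against whatever state the fixture arranged
    }
}

// Concrete classes pair the shared tests with one specific setup.
public class LocalBootstrapTests : BootstrapTests, IClassFixture<LocalEngineFixture> { }
public class RemoteBootstrapTests : BootstrapTests, IClassFixture<RemoteEngineFixture> { }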
I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.), and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running are loaded? I want to avoid creating new assemblies every time I want to create a new test (that seems rather superfluous and cumbersome, not to mention hard to maintain); I've read that tests in different assemblies are loaded separately, but I don't know whether that applies to the test case sources. It's worth mentioning that I'm using ReSharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
Could you do this by moving your source instantiation into helper methods and calling them in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in the [SetUp].
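A minimal sketch of that idea (type and method names are illustrative): wrap the expensive build in Lazy<T> so it only runs when a test in the fixture actually touches the data.

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class OrdersIntegrationTests
{
    // Built on first use instead of at discovery time, so running a single
    // unrelated test never pays the expensive preparation cost.
    private static readonly Lazy<IList<string>> OrderCases =
        new Lazy<IList<string>>(BuildOrderCases);

    private static IList<string> BuildOrderCases()
    {
        // expensive preparation goes here
        return new List<string> { "case1", "case2" };
    }

    [Test]
    public void ProcessesEveryOrderCase()
    {
        foreach (string orderCase in OrderCases.Value)
        {
            Assert.That(orderCase, Is.Not.Empty); // stand-in assertion
        }
    }
}

The trade-off is that you lose the one-result-per-case reporting that TestCaseSource gives you.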
I am using MSTest, and I want to set the same test category for all methods in a test class at once, without applying the TestCategory attribute to each method individually. How can this be done?
The most convenient and obvious way would be to set the TestCategory attribute on the class, but it can be applied to methods only.
The ultimate goal is to skip integration tests during test run on TFS check-in.
To be able to set the [TestCategory] attribute at the class level, install the “MSTest V2” test framework using NuGet.
Ref: https://blogs.msdn.microsoft.com/devops/2016/06/17/taking-the-mstest-framework-forward-with-mstest-v2/
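With MSTest V2 installed, a class-level category applies to every test method in the class; a short illustration (class and method names are made up):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[TestCategory("Integration")] // applies to every test method below
public class OrderIntegrationTests
{
    [TestMethod]
    public void CreatesOrder() { /* ... */ }

    [TestMethod]
    public void CancelsOrder() { /* ... */ }
}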
I've been looking to do something similar, and I've arrived at a solution that works really well for my purposes.
This does not solve the problem of applying a TestCategory on a per-class basis, but you can use mstest's /test: command-line argument to supply a search string that is matched against any part of a test's fully qualified method name. That means you can generally match against the class, or the namespace, or whatever search string will hit the target tests. And if one string doesn't do it, you can pass the /test: argument multiple times, i.e.:
> mstest /testcontainer:My.dll /test:My.FullyQualified.Namespace /test:My.FullyQualified.OtherNamespace.OtherClass
More Info
Edit:
Adding the TestCategory attribute at the class level is now available with MSTest V2, as noted in NomadeNumerique's answer below. Details
The ultimate goal is to skip integration tests during test run on TFS check-in.
There are other ways to do this. In your TFS builds, you can set which unit tests you want to run, depending on their assembly name.
By default, it will run all unit tests in assemblies which have "test" in their name. A simple fix would be to rename your integration test assemblies to something that does not include "test".
If you do want to use the categories, you could try using AOP. For example, with PostSharp you can create an aspect in your integration test assembly that puts the attribute on the methods. Then enable the aspect for all public methods in your integration assembly (if all tests are grouped in one dll) or on each integration test class.
One way to get around this limitation is to put the test category at the beginning of each test method name. For example, name your unit tests
public void UnitTestDoSomething_ExpectThis()
and your integration tests
public void IntegrationTestDoSomething_ExpectThis()
Then when you do your TFS query to get the integration tests you can do
Field[Automated Test Name] with Operator[Contains] and Value[IntegrationTest]
Although this is not a perfect solution, it will help you distinguish your tests in code and TFS. Alternatively, you can look at area and iteration paths.
You can group by "Class name" in the Test Explorer panel.
The TestCategory attribute alone cannot solve your issue, because attributes in C# are metadata and cannot take dynamic values.
In VS 2017 this is possible (and looks to be part of VS2012 update 1).
You can put [TestCategory("Integration")] on a class in your test project and have it apply to all tests in that class, and likewise [TestCategory("Unit")] on your unit test classes.
You can then use the Test Explorer's Search bar to filter by Trait name = Unit and the "Run All" will only run the tests matching your search.
When you go to run these tests on your build server, you can use a switch like /category:Unit to run only the unit tests.
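For example (assuming mstest.exe and a test assembly named My.Tests.dll):
> mstest /testcontainer:My.Tests.dll /category:Unit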
I am using MSTest in Visual Studio 2008 with C#. I have a specific environment variable I would like to set, and a path modification I would like to make, only during the run of either specific tests or, better yet, all tests within a run configuration.
I tried using the test run configuration's setup script to do this, but as I expected, since it is a batch file, the changes are lost once it exits, so that won't work.
Is there any other way to setup temporary system environment variables that will be valid during all tests being run?
While I am not happy with this solution, I was able to get what I needed done in MSTest by using the ClassInitialize attribute on a test class, using Environment.SetEnvironmentVariable to make the changes I need, and then cleaning this up in the method decorated with the ClassCleanup attribute.
For lack of a better answer, this is how I was able to get environment variables set for a group of tests and cleaned up when I was done. However, I would have preferred this to be handled outside of the code, as part of the test configuration in some way. Regardless, the issue has been resolved.
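A minimal sketch of that approach (the variable names and the C:\MyTools path are illustrative); note that Environment.SetEnvironmentVariable without a target argument affects the current process only:

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PathDependentTests
{
    private static string _originalPath;

    [ClassInitialize]
    public static void SetUpEnvironment(TestContext context)
    {
        _originalPath = Environment.GetEnvironmentVariable("PATH");
        // both changes apply to the current (test host) process only
        Environment.SetEnvironmentVariable("PATH", _originalPath + @";C:\MyTools");
        Environment.SetEnvironmentVariable("MY_TEST_FLAG", "1");
    }

    [ClassCleanup]
    public static void RestoreEnvironment()
    {
        Environment.SetEnvironmentVariable("PATH", _originalPath);
        Environment.SetEnvironmentVariable("MY_TEST_FLAG", null); // null removes the variable
    }

    [TestMethod]
    public void ToolRunsFromModifiedPath()
    {
        StringAssert.Contains(Environment.GetEnvironmentVariable("PATH"), @"C:\MyTools");
    }
}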
If you trust that your test suite won't be "aborted" mid-test, you can use FixtureSetup and FixtureTeardown methods to set and then remove your changed environment variables.
EDIT FROM COMMENT: I see where you're coming from, but as in my edit, a UT framework is designed to be used to create unit tests. The concept of a unit test dictates that it should NOT depend on any outside resources, including environment variables. Tests that do are integration tests, and require a lot of infrastructure to be in place (and usually take many times longer to run than a unit test suite of equal LOC).
To create a unit test for code that depends on an environment variable, consider splitting out the lines of code that examine the environment variables directly, putting them into a method in another class, and then mocking that class using RhinoMocks or whatever to provide a "dummy" value for testing without examining (or changing) actual environment variables.
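A minimal sketch of that seam (all names here are illustrative); a hand-written stub is shown, but a RhinoMocks stub would serve the same purpose:

using System;

// Production code takes a dependency on this interface instead of
// reading Environment directly.
public interface IEnvironmentReader
{
    string Get(string name);
}

public class EnvironmentReader : IEnvironmentReader
{
    public string Get(string name)
    {
        return Environment.GetEnvironmentVariable(name);
    }
}

// In tests, substitute a stub so no real environment variable is read or changed.
public class StubEnvironmentReader : IEnvironmentReader
{
    public string Get(string name)
    {
        return "dummy-value";
    }
}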
If this really is an integration test and you really need the environment variable set (say you're changing the path so you can use Process.Start to call your own notepad.exe instead of Windows'), that's what the FixtureSetup and FixtureTeardown methods/attributes are for; to perform complicated setup of a fixed, repeatable environment in which the tests should succeed, and then reset the environment to the way it was, regardless of what happened in the tests. Normally, a test failure throws an exception and ends that test's execution immediately, so code at the end of the test method itself is not guaranteed to run.
I have recently begun using NUnit and now Rhino Mocks.
I am now ready to start using it in a C# project.
The project involves a database library which I must write.
I have read that tests should be independent and not rely on each other or on the order of test execution.
So suppose I wanted to check the FTP or database connection.
I would write something like
[Test]
public void OpenDatabaseConnection_ValidConnection_ReturnsTrue()
{
    SomeDataBase db = new SomeDataBaseLibrary(...);
    bool connectionOk = db.Open();
    Assert.IsTrue(connectionOk);
}
Another test might involve testing some database functionality, like inserting a row.
[Test]
public void InsertData_ValidData_NoExceptions()
{
    SomeDataBase db = new SomeDataBaseLibrary(...);
    db.Open();
    db.InsertSomeRow("valid data", ...);
}
I see several problems with this:
1) In order to be independent of the first test, the last test will have to open the database connection again. (This also requires the first test to close the connection it opened.)
2) Another thing is that if SomeDataBaseLibrary changes, then all the test-methods will have to change as well.
3) The speed of the tests will go down when all these connections have to be established every time the tests run.
What is the usual way of handling this?
I realize that I can use mocks of the DataBaseLibrary, but this doesn't test the library itself which is my first objective in the project.
1:
You can open one connection before all your tests and keep it open until all the tests that use that connection have ended. There are attributes for methods, much like the [Test] attribute, that specify when a method should be called:
http://www.nunit.org/index.php?p=attributes&r=2.2.10
Take a look at:
TestFixtureSetUpAttribute (NUnit 2.1)
This attribute is used inside a TestFixture to provide a single set of functions that are performed once prior to executing any of the tests in the fixture. A TestFixture can have only one TestFixtureSetUp method. If more than one is defined the TestFixture will compile successfully but its tests will not run.
So, within the method marked with this attribute, you can open your database connection and expose the database object to the whole fixture. Then every single test can use that connection. Note that your tests are still independent, even though they use the same connection.
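A minimal sketch, reusing SomeDataBaseLibrary from your question (it assumes the library also exposes a Close method; the attributes are spelled [OneTimeSetUp]/[OneTimeTearDown] in NUnit 3):

using NUnit.Framework;

[TestFixture]
public class DatabaseTests
{
    private SomeDataBaseLibrary _db;

    [TestFixtureSetUp] // runs once, before any test in this fixture
    public void OpenConnection()
    {
        _db = new SomeDataBaseLibrary(/* connection details */);
        _db.Open();
    }

    [TestFixtureTearDown] // runs once, after all tests have finished
    public void CloseConnection()
    {
        _db.Close();
    }

    [Test]
    public void InsertData_ValidData_NoExceptions()
    {
        _db.InsertSomeRow("valid data");
    }
}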
I believe that this also addresses your 3rd concern.
I am not quite sure how to answer your 2nd concern, as I do not know the extent of the changes that take place in the SomeDataBaseLibrary class.
Just nit-picking: these tests are integration tests, not unit tests. But that doesn't matter right now, as far as I can tell.
As @sbenderli pointed out, you can use TestFixtureSetUp to open the connection, and write a nearly-empty test that just asserts the condition of the DB connection. I think you'll have to give up on the ideal of 1 bug -> 1 test failing here, as multiple tests require connecting to the test database. If using your data access layer has any side-effects (e.g. caching), be extra careful here about interacting tests (<- link might be broken).
This is good. Tests are supposed to demonstrate how to use the SUT (software under test--SomeDataBaseLibrary in this case). If a change to the SUT requires change to how it's used, you want to know. Ideally, if you make a change to SomeDataBaseLibrary that will break client code, it will break your automated tests. Whether you have automated tests or not, you will have to change anything depending on the SUT; automated tests are one additional thing to change, but they also let you know that you have to make said change.
Your 3rd concern is taken care of with TestFixtureSetUp.
One more thing, which you may have taken care of already: InsertData_ValidData_NoExceptions does not clean up after itself, leading to interacting tests. The easiest way I have found to make tests clean up after themselves is to use a TransactionScope: just create one in your SetUp method and Dispose it in TearDown. Works like a charm (with compatible databases) in my experience.
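A minimal sketch of that pattern (it assumes your database connection enlists in ambient transactions):

using System.Transactions;
using NUnit.Framework;

[TestFixture]
public class RollbackDatabaseTests
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // connections opened inside the scope enlist automatically
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        // disposing without calling Complete() rolls back everything the test wrote
        _scope.Dispose();
    }
}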
EDIT:
Once you have the connection logic in a TestFixtureSetUp, you can still test the connection like this:
[Test]
public void Test_Connection()
{
    Assert.IsTrue(connectionOk);
}
One downside to this is that the Exercise step of the test is implicit--it is part of the setup logic. IMHO, that's OK.