I have a collection of 30+ tests in one file, and I'm running them individually using MSTest in Visual Studio. I noticed an unusual bug today, however: when I run a test on its own, all other tests that share the same root name as that test also run.
For example, if I run TestAddClaim, then TestAddClaimWrongID and TestAddClaimWrongEnrollmentAction also run. Similarly, if I run TestAddGroupContact, then after it has completed, TestAddGroupContactWrongGroupID also runs.
Is this a known issue?
I'm going to prepend numbers to all of my tests as a way of preventing this from happening. Is there a better solution?
This is by design, so that you can run multiple tests with one command. The downside is that you end up unintentionally running multiple tests with one command. See https://msdn.microsoft.com/en-us/library/ms182489.aspx
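For illustration, a hedged sketch of that matching behavior with the MSTest command-line runner (the container name is invented; the test names are borrowed from the question):

```
REM The /test switch matches by substring against the test name, so
REM asking for one test can run several:
mstest /testcontainer:Claims.Tests.dll /test:TestAddClaim
REM Runs TestAddClaim, TestAddClaimWrongID and
REM TestAddClaimWrongEnrollmentAction.
```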
I also ran into this, but my tests are on different test lists and have different test traits, so I used "/category" to limit the run to tests with certain traits. You can also use "/testlist" to specify only tests from certain ordered test lists.
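As a sketch of the category route (this assumes a version of MSTest that supports the [TestCategory] trait, i.e. VS 2010 or later; names are invented for the example):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ClaimTests
{
    [TestMethod]
    [TestCategory("Claims")] // trait used to filter the run
    public void TestAddClaim()
    {
        // ...
    }
}

// Limit a command-line run to one category:
//   mstest /testcontainer:Claims.Tests.dll /category:Claims
// Or run only the tests on a named test list in the .vsmdi metadata:
//   mstest /testmetadata:Claims.vsmdi /testlist:SmokeTests
```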
How can I order my scenarios to run in a given order?
I have a series of scenarios that depend on the previous scenario having run.
I'm running my SpecFlow tests using nunit3-console.
I haven't found anything online that seems to work.
And yes, my tests do need to run in a particular order; otherwise this is pointless.
Scenario: I perform first scenario
Scenario: I perform second scenario
NUnit by default runs tests in alphabetical order. As of NUnit 3 you can try running these in order, reference here.
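For illustration, a minimal sketch using NUnit 3's OrderAttribute (plain NUnit test methods standing in for the SpecFlow-generated scenarios; names mirror the question):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderedScenarioTests
{
    [Test, Order(1)] // lower Order values run first within the fixture
    public void IPerformFirstScenario()
    {
        // ... first scenario steps ...
    }

    [Test, Order(2)] // runs after Order(1); unordered tests run last
    public void IPerformSecondScenario()
    {
        // ... steps that depend on the first scenario ...
    }
}
```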
It is not considered good practice to require scenarios to run in a particular order, though, as each and every one of them should be able to run independently of the others.
Other posts with similar information can be found here and here.
I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.), and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running are loaded? I want to avoid creating new assemblies every time I want to create a new test (that seems rather superfluous and cumbersome, not to mention hard to maintain), since I've read that tests in different assemblies are loaded separately, but I don't know whether the same applies to the test case sources. It's worth mentioning that I'm using ReSharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
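For context, a minimal sketch of the pattern in question; the source method here is just a stand-in for the expensive preparation:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ClaimTests
{
    // NUnit evaluates every [TestCaseSource] in the assembly when the
    // tests are loaded, so this runs even if only a test in some other
    // fixture has been selected.
    private static IEnumerable<int> SlowSource()
    {
        // stand-in for the ~1 min. preparation described above
        yield return 1;
        yield return 2;
    }

    [TestCaseSource("SlowSource")]
    public void Value_IsPositive(int value)
    {
        Assert.Greater(value, 0);
    }
}
```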
Could you do this by moving your source instantiation into helper methods and calling them in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in the [SetUp] method.
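A minimal sketch of that idea, assuming NUnit and an invented helper; the data is built in [SetUp] for the fixtures that actually need it, rather than in an assembly-wide [TestCaseSource]:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ClaimIntegrationTests
{
    private List<string> _claimData;

    [SetUp]
    public void SetUp()
    {
        // Only this fixture pays the cost, and only when it runs.
        _claimData = BuildClaimData();
    }

    private static List<string> BuildClaimData()
    {
        // stand-in for the expensive source preparation
        return new List<string> { "claim-1", "claim-2" };
    }

    [Test]
    public void ClaimData_IsAvailable()
    {
        Assert.That(_claimData, Is.Not.Empty);
    }
}
```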
I am using Visual Studio 2008 and I would like to be able to split up my unit tests into two groups:
Quick tests
Longer tests (e.g. interactions with a database)
I can only see an option to run all or one, and also to run all of the tests in a unit test class.
Is there any way I can split these up or specify which tests to run when I want to run a quick test?
Thanks
If you're using NUnit, you could use the CategoryAttribute.
The equivalent in MSTest is the TestCategory attribute - see here for a description of how to use it.
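A minimal sketch with NUnit's CategoryAttribute (category and test names invented for the example):

```csharp
using NUnit.Framework;

[TestFixture]
public class OrderServiceTests
{
    [Test]
    [Category("Quick")] // fast, purely in-memory
    public void Total_AddsLineItems()
    {
        Assert.AreEqual(5, 2 + 3);
    }

    [Test]
    [Category("LongRunning")] // talks to a real database, so it's slow
    public void Save_PersistsOrder()
    {
        // ...
    }
}

// The NUnit 2.x console runner can then run just one group:
//   nunit-console /include:Quick MyProj.Tests.dll
```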
I would distinguish your unit test groups as follows:
Unit Tests - Testing single methods / classes, with stubbed dependencies. Should be very quick to execute, as there are only internal dependencies.
Integration Tests - Testing two or more components together, such as your data access classes with an actual backing database. These are generally lengthy, as you may be dealing with an external dependency such as a DB or web service. However, these could still be quick tests depending on what components you are integrating. The key here is that the scope of the test is different from unit tests.
I would create separate test libraries, e.g. MyProj.UnitTests.dll and MyProj.IntegrationTests.dll. This way, your unit test library would have fewer dependencies than your integration tests. It will then be easy to specify which test group you want to run.
You can set up a continuous integration server, if you are using something like that, to run the tests at different times, knowing that the first group is quicker than the second. For example, unit tests could run immediately after code is checked in to your repository, and integration tests could run overnight. It's easy to set something like this up using TeamCity.
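As a rough sketch, assuming MSTest and the assembly names above, the CI server could invoke each group on its own schedule:

```
REM On every check-in: the fast unit tests only
mstest /testcontainer:MyProj.UnitTests.dll

REM Overnight: the slower integration tests
mstest /testcontainer:MyProj.IntegrationTests.dll
```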
There is the Test List Editor. I'm not at my Visual Studio computer right now, so I'll just point to this answer.
I keep wondering what "contexts" are when it comes to unit testing. There seem to be three options for running tests in Visual Studio:
All Tests in the Current Context
All Tests in Solution
All Impacted Tests
Point 2) is quite obvious to me, but I don't understand what points 1) and 3) mean.
Thanks
All Tests in the Current Context: The current context depends on where your cursor is. If it's in a method, that test method will be run. If it's in a class, but not in a method, all test methods in the class will be run.
All Tests in Solution: Runs all tests
All Impacted Tests: Visual Studio figures out which test methods need to be run to test any changes you've made in your code. It runs only those tests that test the changed code. The main benefit of this feature is when you have a large number of test methods, you don't need to run the entire set of tests, which can take a while. You can read more about this here: Link
Tests in the Current Context: This option works if your cursor is inside a test method; if selected, it runs only the test within the boundaries of that particular method.
All Tests in Solution:
If your cursor is outside a method, selecting this option will run all the tests in your test class(es).
All Impacted Tests:
Not sure about that one, as I switched to NUnit in the very early days of unit testing. My instance of Visual Studio 2008 doesn't show this option either, so I can't check how it behaves. Would love to know, if anyone has a way.
Hope it helps.
I believe "Impacted Tests" is a new feature of VS2010. It will run the tests "impacted" by recent changes to your code. That is, it will look at what the tests seem to test, and if you have made changes to the code that they test, then that will be an impacted test.
So I have been running into all kinds of interesting problems in Visual Studio 2008 when running unit tests.
For example, when running Visual Studio unit tests, some tests fail together but pass individually. This is happening because class-level variables in the test class are being reused across the unit tests.
Now normally I would go into each class and fix this problem manually! However, we are talking about tests that range in the thousands!
Now here comes the interesting dilemma: using both the ReSharper test runner and the TFS build server, they all pass together!
Is there any way that I could configure the VS Unit Test Solution to run in the same fashion? I want to avoid calling the [TestInitialize] methods in the [TestCleanup] methods.
This is usually a byproduct of differently ordered tests. ReSharper 4.x and earlier runs unit tests in the order they appear in the source file. Almost all other unit test runners run tests in alphabetical order. This difference in ordering can (but never should) affect whether tests pass or fail (based on left-over data in a database or in statics).
ReSharper 5.0 no longer uses a custom runner, so it should fix these inconsistencies.
However, this type of inconsistency indicates a problem in the tests: some are leaving data behind that they should be cleaning up, and some are dependent on, or hurt by, data left over from a previous test.
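To make that concrete, a hedged sketch of the kind of shared state that produces order-dependent results, with the usual fix of resetting it in [TestInitialize] rather than duplicating the reset in [TestCleanup] (names invented for the example):

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderSensitiveTests
{
    // Shared mutable state like this is why tests can pass
    // individually but fail when run together in a different order.
    private static int _counter;

    [TestInitialize]
    public void TestInitialize()
    {
        // Resetting the state before each test removes the order
        // dependency without touching [TestCleanup].
        _counter = 0;
    }

    [TestMethod]
    public void Increment_SetsCounterToOne()
    {
        _counter++;
        Assert.AreEqual(1, _counter);
    }

    [TestMethod]
    public void Counter_StartsAtZero()
    {
        // Without the reset above, this fails whenever the other
        // test runs first and leaves _counter non-zero.
        Assert.AreEqual(0, _counter);
    }
}
```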