What are Contexts in MSTest? - C#

I keep wondering what Contexts are when it comes to unit testing. There seem to be 3 options for running tests in Visual Studio:
All Tests in the Current Context
All Tests in Solution
All Impacted Tests
Point 2) is quite obvious to me, but I don't understand what Points 1) and 3) mean.
Thanks

All Tests in the Current Context: The current context depends on where your cursor is. If it's in a method, that test method will be run. If it's in a class, but not in a method, all test methods in the class will be run.
All Tests in Solution: Runs all tests
All Impacted Tests: Visual Studio figures out which test methods need to be run to test any changes you've made in your code. It runs only those tests that test the changed code. The main benefit of this feature is when you have a large number of test methods, you don't need to run the entire set of tests, which can take a while. You can read more about this here: Link

Tests in the Current Context: This option works if your cursor is inside a test method; if selected, it runs only the test defined within the boundaries of that particular method.
All Tests in Solution:
If your cursor is outside a method, selecting this option will run all the tests in your test class(es).
All Impacted Tests:
Not sure about that one, as I switched to NUnit in the very early days of my unit testing. My instance of Visual Studio 2008 isn't showing this option either, so I can't check how it behaves. Would love to know, though.
Hope it helps.

I believe "Impacted Tests" is a new feature of VS2010. It will run the tests "impacted" by recent changes to your code. That is, it will look at what the tests seem to test, and if you have made changes to the code that they test, then that will be an impacted test.

Related

MSTest in VS running multiple tests that have similar names

I have a collection of 30+ tests in one file and I'm running them individually using MSTest in Visual Studio. I noticed an unusual bug today, however. It seems that when I run a test on its own, all other tests that share the same root name as that test also run.
For example, if I run TestAddClaim, then TestAddClaimWrongID and TestAddClaimWrongEnrollmentAction also run. Similarly, if I run TestAddGroupContact, then after it has completed TestAddGroupContactWrongGroupID also runs.
Is this a known issue?
I'm going to prepend numbers to all of my tests as a way of preventing this from happening. Is there a better solution?
This is by design so that you can run multiple tests with one command. The downside is that you end up unintentionally running multiple tests with one command. See https://msdn.microsoft.com/en-us/library/ms182489.aspx
I also ran into this, but my tests are on different test lists and have different test traits so I used "/category" to limit the tests run to those containing certain Traits. You can also use "/testlist" to specify only tests from certain ordered test lists.
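As a rough sketch of that /category approach (the category names and test names below are illustrative, not from the original project), you mark tests with [TestCategory] and then filter on a category from the MSTest command line, so only the intended group runs:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ClaimTests
{
    [TestMethod]
    [TestCategory("AddClaim")] // hypothetical category name
    public void TestAddClaim()
    {
        // arrange/act/assert for the happy path
    }

    [TestMethod]
    [TestCategory("AddClaimValidation")]
    public void TestAddClaimWrongID()
    {
        // runs only when its own category (or the whole container) is selected
    }
}

// From a command prompt, run only one category:
//   MSTest.exe /testcontainer:MyTests.dll /category:"AddClaim"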

NUnit - Loads ALL TestCaseSources even if they're not required by current test

I recently started using NUnit to do integration testing for my project. It's a great tool, but I've found one drawback that I cannot seem to get the answer to. All my integration tests use the TestCaseSource attribute and specify a test case source name for each test. Now the problem is that preparing these test case sources takes quite some time (~1 min.) and if I'm running a single test, NUnit always loads EVERY SINGLE test case source, even if it's not a test case source for the test that I'm running.
Can this behavior be changed so that only the test case source(s) for the test I'm running load? I want to avoid creating new assemblies every time I want to create a new test (seems rather superfluous and cumbersome, not to mention, hard to maintain), since I've read that tests in different assemblies are loaded separately, but I don't know about the test case sources. It's worth mentioning that I'm using Resharper as the test runner.
TL;DR: Need to tell NUnit to only load the TestCaseSources that are needed for the tests running in the current session. Current behavior is that ALL TestCaseSources are loaded for any test that is run.
Could you do this by moving your source instantiation into helper methods and calling them in the setup methods for each set of tests?
I often have a set of helper methods in my integration test suite that set up shared data for different tests.
I call just the helper methods that I need for the current suite in the [SetUp].
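A minimal sketch of that idea, with hypothetical helper and fixture names: each fixture's [SetUp] builds only the data it actually needs, instead of relying on one eagerly built shared source.

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical helpers; each one prepares a single expensive data set on demand.
public static class TestDataHelpers
{
    public static List<string> BuildClaimCases()
    {
        // imagine the ~1 minute of database/file preparation happening here
        return new List<string> { "claim-1", "claim-2" };
    }

    public static List<string> BuildContactCases()
    {
        return new List<string> { "contact-1" };
    }
}

[TestFixture]
public class ClaimIntegrationTests
{
    private List<string> _claims;

    [SetUp]
    public void SetUp()
    {
        // Only the data this fixture actually needs gets built.
        _claims = TestDataHelpers.BuildClaimCases();
    }

    [Test]
    public void ProcessesEveryClaim()
    {
        Assert.IsNotEmpty(_claims);
    }
}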

Mapping Unit Tests to methods

We're using Microsoft's Unit Test program and we use the Unit Test Wizard to create a one-to-one mapping for methods in each class from the business layer. The issue is the amount of work needed to go through and determine whether we are missing any tests after the initial tests were created.
Currently I have to run the wizard and look for tests that have a "1" appended to the default name [method][test]. Those with that name mean we already have a test for that method. The ones without an appended 1 are methods that don't have a unit test following the default naming convention.
I'm wondering if there is a way to map a unit test to a method with an attribute on the method so it doesn't take as much work. And yes, I know that if we were following TDD we would write the unit test first. We write the tests in parallel with development (but sometimes in the rush a test is missed).
If you are using Visual Studio 2012 and have the appropriate version, it has proper code coverage analysis built in: "Run tests with code coverage".
Otherwise, you can use a diagnostic tool to run code coverage, such as NCover. You can do this from inside Visual Studio using TestDriven.net.
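For the attribute idea from the question, one possible (entirely hypothetical) sketch is a custom attribute that records which business-layer method a test covers, plus a reflection pass that lists public methods no test claims to cover:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Hypothetical marker, e.g. [CoversMethod(typeof(ClaimService), "AddClaim")]
[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public sealed class CoversMethodAttribute : Attribute
{
    public CoversMethodAttribute(Type targetType, string methodName)
    {
        TargetType = targetType;
        MethodName = methodName;
    }

    public Type TargetType { get; private set; }
    public string MethodName { get; private set; }
}

public static class CoverageAudit
{
    // Returns the names of public instance methods on targetType that no test
    // in testAssembly claims to cover via [CoversMethod].
    public static string[] FindUncoveredMethods(Type targetType, Assembly testAssembly)
    {
        var covered = new HashSet<string>(
            testAssembly.GetTypes()
                .SelectMany(t => t.GetMethods())
                .SelectMany(m => m.GetCustomAttributes(typeof(CoversMethodAttribute), false)
                    .Cast<CoversMethodAttribute>())
                .Where(a => a.TargetType == targetType)
                .Select(a => a.MethodName));

        return targetType
            .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
            .Select(m => m.Name)
            .Where(name => !covered.Contains(name))
            .Distinct()
            .ToArray();
    }
}

A small audit test (or build step) could then fail whenever FindUncoveredMethods returns anything, flagging business-layer methods that no test claims to cover.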

How to test if class works properly in C#?

I've written a class and want to test whether it works well. For now, I think the best way to do it is to create a new console application referencing the main project, then make a new instance of my class and mess with it. This approach, unlike others, enables IntelliSense, using keywords (no full names for classes), and debugging.
Does anyone know a more convenient way to do this without making a new console app?
Using a console app to test your class is what I would call a "poor man's unit test."
You are on the right track in wanting to do this sort of testing and I (and most others on SO) would suggest using a unit testing framework of some sort to help you out. (Mocking may be important and useful to you as well.)
Here's the thing though. Regardless of what framework you use, or if you go with a console app to test your code, you do have to create a separate project, or a separate, significant chunk of code of some sort, to be able to execute tests properly and independently. That's just part of the process. It is an investment, but don't let the extra work keep you from doing it. It will save a lot of time, and your skin, a little while in the future. Maybe even next week.
Also, while you're looking up unit testing, make sure to also study up on test-driven development (TDD).
Unit testing is absolutely the way to go. Depending on what version of VS you are running, there may be unit testing functionality built in, or you may have to use an additional tool such as NUnit. Both options are good and will allow you to fully test your classes.
Bear in mind also, that a comprehensive suite of unit tests will make refactoring much easier in the long run as well. If you refactor and break your unit tests, you know you've made a boo-boo somewhere. :)
Unit testing is the way forward here; this is a good introductory article.
The basic concept of a unit test is that you isolate and invoke a specific portion of code and assert that the results are expected and within reason. For example, let's say you have a simple method to return the square of a floating point number:
public float Square(float value)
{
    return value * value;
}
A reasonable unit test would check that the method returns the correct value, handles negative values, etc.:
Assert.AreEqual(25, Square(5));
Assert.AreEqual(100, Square(-10));
Unit tests are also a good way to pin down how your code handles edge cases (squaring float.MaxValue, for instance, overflows to infinity rather than throwing):
Assert.AreEqual(float.PositiveInfinity, Square(float.MaxValue));
If you are using VS 2010, check out Pex and Moles...
http://research.microsoft.com/en-us/projects/pex/
The Console App approach is more of a test harness for your class, which is fine.
But you can also use a unit testing framework to test your class. Visual Studio has one built in or you can leverage something like NUnit.
Also, try the Object Test Bench in Visual Studio. It should allow you to create a new instance, modify and view properties, and call some methods. It usually only works with very simple apps, though.
If you use Visual Studio 2008 or higher, you will be able to test your code using the MSTest framework:
1. Open the Test View window: Test/Windows/Test View;
2. Add a new unit test project: right-click on the solution in Solution Explorer/Add/New Project/Test Project;
3. Remove all files apart from the UnitTest.cs file in the created test project;
4. Write your unit test in a method under the [TestMethod] attribute:
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestMethod1()
    {
        // Ranges is the class under test here; substitute your own type.
        var ranges = new Ranges();
        int sum = ranges.CountOccurrences(11);
        Assert.AreEqual(128, sum);
    }
}
5. Run your test from the Test View window added in step 1;
6. See the test results in the Test/Windows/Test Results window.

Run Visual Studio Unit Tests vs Run ReSharper Unit Tests, differences?

So I have been running into all kinds of interesting problems in Visual Studio 2008 when running unit tests.
For example, when running Visual Studio unit tests, some tests fail when run together but pass individually. This is happening because some class-level variables in the test class are being reused across the unit tests.
Now, normally I would go into each class and fix this problem manually! However, we are talking about tests that number in the thousands!
Now here comes the interesting dilemma: using both the ReSharper test runner and the TFS build server, they pass when run together!
Is there any way that I could configure the VS Unit Test Solution to run in the same fashion? I want to avoid calling the [TestInitialize] methods in the [TestCleanup] methods.
This is usually a byproduct of differently ordered tests. ReSharper 4.x and earlier runs unit tests based on the order they appear in the source file. Almost all other unit test runners run tests in alphabetical order. This different ordering can (but never should) affect whether or not tests pass/fail (based on left over data in a database or statics).
ReSharper 5.0 is not using a custom runner anymore so it should fix these inconsistencies.
However, this type of inconsistency indicates a problem in the tests. Some are leaving data behind that they should be cleaning up and some are dependent on, or hurt by, data left over from a previous test.
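A minimal sketch of the fix this implies, using hypothetical names: reset any class-level state in [TestInitialize] so each test starts from a known baseline, regardless of which runner is used or in what order the tests execute.

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class ClaimProcessingTests
{
    // Class-level state shared by the tests below (hypothetical example).
    private List<string> _processedClaims;

    [TestInitialize]
    public void TestInitialize()
    {
        // Rebuild shared state before every test so no test depends on
        // what a previously-run test left behind.
        _processedClaims = new List<string>();
    }

    [TestMethod]
    public void AddClaim_AddsOneEntry()
    {
        _processedClaims.Add("claim-1");
        Assert.AreEqual(1, _processedClaims.Count);
    }

    [TestMethod]
    public void NoClaims_StartsEmpty()
    {
        // Passes regardless of run order because TestInitialize reset the list.
        Assert.AreEqual(0, _processedClaims.Count);
    }
}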
