My question is: how can I run a single test on a single TestFixture at a time (for debugging, for example)?
I have several TestFixtures in each test category, and whenever I click the R# icon on a single test it offers 'Run'/'Debug'; however, even when I select the exact test and fixture, R# and NUnit run all of the fixtures one after the other.
[Category("LoginTestSuite")]
[TestFixture(SiteEditionsEnum.AsiaPacific)]
[TestFixture(SiteEditionsEnum.Australia)]
[TestFixture(SiteEditionsEnum.Canada)]
[TestFixture(SiteEditionsEnum.CanadaFrench)]
[TestFixture(SiteEditionsEnum.France)]
[TestFixture(SiteEditionsEnum.Germany)]
[TestFixture(SiteEditionsEnum.HongKong)]
[TestFixture(SiteEditionsEnum.Japan)]
[TestFixture(SiteEditionsEnum.Spain)]
[TestFixture(SiteEditionsEnum.UnitedKingdom)]
[TestFixture(SiteEditionsEnum.UnitedStates)]
public class LoginTestSuite : FrontEndTestSuitesCommon
{
    [...]

    [Test]
    public void RunLoginFunctionalTest()
    {
        Logger.Log(MessageType.None, "This test case is using the email address: " + ConfigurationManager.AppSettings["DefaultLoginEmail"], LogLevel.Info);
        Actions.Login.GetToLoginPage();
        Actions.Login.SetLoginCredentials();
    }
}
After the click, all fixtures start running (i.e. all site editions).
The menu seems to say it will run or debug only that test with the UnitedKingdom fixture, but it doesn't; it runs all fixtures instead.
I'm using VS2008 SP1 and ReSharper 7.0.97.60 with its built-in NUnit 2.6.
First off, I think there is a fundamental flaw in your test setup: you're trying to make your one test do too much. One of the primary concepts of unit testing is single responsibility; having verbose tests isn't a bad thing, and having a lot of tests isn't a bad thing.
My suggestion would be to break your tests up into a class for each TestFixture, or to create a test for each TestFixture item. It's not that ReSharper is running your tests wrong; it's that the way you have written your tests doesn't match how ReSharper was designed to work.
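For example, here is only a hedged sketch of that idea, reusing the types from your question (FrontEndTestSuitesCommon, SiteEditionsEnum, Actions); the base class and fixture names below are invented. Each site edition gets its own small fixture class that inherits a shared test body, so ReSharper sees every edition as a separately runnable class:

using NUnit.Framework;

// Shared test body; the concrete fixtures below supply the edition.
public abstract class LoginTestSuiteBase : FrontEndTestSuitesCommon
{
    protected readonly SiteEditionsEnum Edition;

    protected LoginTestSuiteBase(SiteEditionsEnum edition)
    {
        Edition = edition;
    }

    [Test]
    public void RunLoginFunctionalTest()
    {
        // Same test body as before, parameterised by the Edition field.
        Actions.Login.GetToLoginPage();
        Actions.Login.SetLoginCredentials();
    }
}

// One tiny fixture per edition, so the runner (and ReSharper) can run any one in isolation.
[TestFixture]
public class UnitedKingdomLoginTests : LoginTestSuiteBase
{
    public UnitedKingdomLoginTests() : base(SiteEditionsEnum.UnitedKingdom) { }
}

[TestFixture]
public class UnitedStatesLoginTests : LoginTestSuiteBase
{
    public UnitedStatesLoginTests() : base(SiteEditionsEnum.UnitedStates) { }
}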
Related
We're using Microsoft's unit test tooling, and we use the Unit Test Wizard to create a one-to-one mapping to the methods of each class in the business layer. The issue is the amount of work needed to go through and determine whether we are missing any tests after the initial tests were created.
Currently I have to run the wizard and look for tests that have a "1" appended to the default name [method][test]. Those with that name mean we already have a test for that method; the ones without an appended 1 are methods that don't have a unit test following the default naming convention.
I'm wondering if there is a way to map a unit test to a method with an attribute on the method, so it doesn't take as much work. And yes, I know that if we were following TDD we would write the unit test first. We write the tests in parallel with development (but sometimes, in a rush, one is missed).
If you are using Visual Studio 2012 and have the appropriate version, it has proper code coverage analysis built in: "Run tests with code coverage".
Otherwise, you can use a diagnostic tool such as NCover to run code coverage. You can do this from inside Visual Studio using TestDriven.net.
I am using Visual Studio 2008 and I would like to be able to split up my unit tests into two groups:
Quick tests
Longer tests (i.e. interactions with database)
I can only see an option to run all or one, and also to run all of the tests in a unit test class.
Is there any way I can split these up or specify which tests to run when I want to run a quick test?
Thanks
If you're using NUnit, you could use the CategoryAttribute.
The equivalent in MSTest is the TestCategory attribute - see here for a description of how to use it.
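As a rough sketch (the category names and test bodies below are just placeholders), marking tests with NUnit categories looks like this; with MSTest you would put [TestCategory("Quick")] on the test methods instead:

using NUnit.Framework;

[TestFixture]
public class MathAndDatabaseTests
{
    // Fast, in-memory test: include it in the quick pass,
    // e.g. nunit-console /include:Quick MyTests.dll
    [Test]
    [Category("Quick")]
    public void Adds_two_numbers()
    {
        Assert.AreEqual(4, 2 + 2);
    }

    // Slow test that would talk to a real database: run it separately,
    // e.g. nunit-console /include:LongRunning MyTests.dll
    [Test]
    [Category("LongRunning")]
    public void Saves_and_reloads_a_record()
    {
        // ...database round-trip would go here...
        Assert.Pass("Placeholder for a database round-trip.");
    }
}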
I would distinguish your unit test groups as follows:
Unit Tests - Testing single methods / classes, with stubbed dependencies. These should be very quick to execute, as there are only internal dependencies.
Integration Tests - Testing two or more components together, such as your data access classes against an actual backing database. These are generally lengthy, as you may be dealing with an external dependency such as a DB or web service. However, they could still be quick tests depending on which components you are integrating; the key is that the scope of the test is different from unit tests.
I would create separate test libraries, i.e. MyProj.UnitTests.dll and MyProj.IntegrationTests.dll. This way, your unit test library will have fewer dependencies than your integration tests, and it will be easy to specify which test group you want to run.
You can also set up a continuous integration server, if you are using something like that, to run the two groups at different times, knowing that the first group is quicker than the second. For example, unit tests could run immediately after code is checked in to your repository, and integration tests could run overnight. It's easy to set something like this up using TeamCity.
There is the Test List Editor. I'm not at my Visual Studio computer now so I'll just point to this answer.
I keep wondering what are Contexts when it comes to Unit testing. There seem to be 3 options for doing tests in Visual Studio:
All Tests in the Current Context
All Tests in Solution
All Impacted Tests
Point 2) is quite obvious to me, but I don't understand what points 1) and 3) mean.
Thanks
All Tests in the Current Context: The current context depends on where your cursor is. If it's in a method, that test method will be run. If it's in a class but not in a method, all test methods in the class will be run.
All Tests in Solution: Runs all tests
All Impacted Tests: Visual Studio figures out which test methods need to be run to test any changes you've made in your code. It runs only those tests that test the changed code. The main benefit of this feature is when you have a large number of test methods, you don't need to run the entire set of tests, which can take a while. You can read more about this here: Link
Tests in the Current Context: This option works if your cursor is inside a test method; if selected, it runs only the test defined by that particular method.
All Tests in Solution: If your cursor is outside a method, selecting this option runs all of the tests in your test class(es).
All Impacted Tests: Not sure about that one, as I switched to NUnit in the very early days of unit testing, and my instance of Visual Studio 2008 doesn't show this option either, so I can't check how it behaves. Would love to know, though.
Hope it helps.
I believe "Impacted Tests" is a new feature of VS2010. It will run the tests "impacted" by recent changes to your code. That is, it will look at what the tests seem to test, and if you have made changes to the code that they test, then that will be an impacted test.
So I have been running into all kinds of interesting problems in Visual Studio 2008 when running unit tests.
For example, when running Visual Studio unit tests, some tests fail together but pass individually. This is happening because some class-level variables in the test class are being reused across the unit tests.
Normally I would go into each class and fix this problem manually! However, we are talking about tests numbering in the thousands!
Now here comes the interesting dilemma: using both the ReSharper test runner and the TFS build server, they pass together!
Is there any way I could configure the VS unit test solution to run in the same fashion? I want to avoid calling the [TestInitialize] methods in the [TestCleanup] methods.
This is usually a byproduct of differently ordered tests. ReSharper 4.x and earlier runs unit tests in the order they appear in the source file, while almost all other unit test runners run tests in alphabetical order. This different ordering can (but never should) affect whether tests pass or fail, based on leftover data in a database or in statics.
ReSharper 5.0 no longer uses a custom runner, so it should fix these inconsistencies.
However, this type of inconsistency indicates a problem in the tests: some are leaving data behind that they should be cleaning up, and some are dependent on, or hurt by, data left over from a previous test.
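As a made-up illustration of that kind of leftover state (the class and test names are invented): a static field shared by two MSTest methods makes the outcome depend on run order, and resetting it in [TestInitialize] removes that dependence.

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderDependentTests
{
    // Static state survives from one test to the next, so run order matters.
    private static List<int> _items = new List<int>();

    // Resetting the shared state before every test makes them order-independent.
    [TestInitialize]
    public void ResetState()
    {
        _items = new List<int>();
    }

    [TestMethod]
    public void Adding_one_item_gives_a_count_of_one()
    {
        _items.Add(1);
        Assert.AreEqual(1, _items.Count);
    }

    [TestMethod]
    public void List_starts_empty()
    {
        // Without the [TestInitialize] reset, this fails whenever the
        // other test has already run and left an item behind.
        Assert.AreEqual(0, _items.Count);
    }
}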
I've got a LOT of tests written for a piece of software (which is a GREAT thing), but they were built essentially as a standalone test program in C#. While this works well enough, it suffers from a few shortcomings, not the least of which is that it isn't using a standard testing framework, and it ends up requiring the person running the tests to comment out calls to tests that shouldn't be run (when running the entire test 'suite' isn't desired). I'd like to incorporate it into my automated testing process.
I saw that the Test Edition of VS 2008 has the notion of a 'Generic Test' that might do what I want, but we're not in a position to spend the money on that version currently. I recently started using the VS 2008 Pro version.
These test methods follow a familiar pattern:
Do some setup for the test.
Execute the test.
Reset for the next test.
Each of them returns a bool (pass/fail) and a string ref to a fail reason, filled in if it fails.
On the bright side, at least the test methods are consistent.
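For concreteness, each one looks roughly like this (the names and the arithmetic are invented purely for illustration):

// Hypothetical example of the existing hand-rolled pattern described above.
public bool VerifyDiscountCalculation(out string failReason)
{
    failReason = "";

    // 1. Set up for the test.
    decimal unitPrice = 5m;
    int quantity = 10;

    // 2. Execute the test: a 10% discount is expected on the order total.
    decimal total = unitPrice * quantity * 0.9m;
    if (total != 45m)
    {
        failReason = "Expected a total of 45 but got " + total;
        return false;
    }

    // 3. Reset for the next test (nothing to reset in this toy example).
    return true;
}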
I am sitting here tonight contemplating the approach I might take tomorrow morning to migrate all this test code to a testing framework and, frankly, I'm not all that excited about the idea of poring over 8-9K lines of test code by hand to do the conversion.
Have you had any experience undertaking such a conversion? Do you have any tips? I think I might be stuck slogging through it all doing global search/replaces and hand-changing the tests.
Any thoughts?
If you use NUnit (which you should), you'll need to create a new test method for each of your current test methods. NUnit uses reflection to query the test class for methods marked with the [Test] attribute, which is how it builds the list of tests that shows up in the UI, and the test methods use NUnit's Assert methods to indicate whether they've passed or failed.
It seems to me that if your test methods are as consistent as you say, all of those NUnit methods would look something like this:
[Test]
public void MyTest()
{
    string msg;
    bool result = OldTestClass.MyTest(out msg);
    if (!result)
    {
        Console.WriteLine(msg);
    }
    Assert.IsTrue(result, msg);
}
Once you get that to work, your next step is to write a program that uses reflection to get all of the test method names on your old test class and produces a .cs file that has an NUnit method for each of your original test methods.
Annoying, maybe, but not extremely painful. And you'll only need to do it once.
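Here is a rough sketch of that generator. It assumes the old tests live in a class called OldTestClass, are public static methods, and all follow the bool SomeTest(out string msg) shape; all of those are assumptions based on the description above, not known details of your code.

using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;

class NUnitWrapperGenerator
{
    static void Main()
    {
        var sb = new StringBuilder();
        sb.AppendLine("using NUnit.Framework;");
        sb.AppendLine();
        sb.AppendLine("[TestFixture]");
        sb.AppendLine("public class GeneratedTests");
        sb.AppendLine("{");

        // Find every public static method on the old test class that matches
        // the pattern: returns bool and takes a single "out string" parameter.
        var candidates = typeof(OldTestClass)
            .GetMethods(BindingFlags.Public | BindingFlags.Static)
            .Where(m => m.ReturnType == typeof(bool)
                        && m.GetParameters().Length == 1
                        && m.GetParameters()[0].IsOut);

        foreach (var method in candidates)
        {
            // Emit one NUnit wrapper per original test method.
            sb.AppendLine("    [Test]");
            sb.AppendLine("    public void " + method.Name + "()");
            sb.AppendLine("    {");
            sb.AppendLine("        string msg;");
            sb.AppendLine("        bool result = OldTestClass." + method.Name + "(out msg);");
            sb.AppendLine("        Assert.IsTrue(result, msg);");
            sb.AppendLine("    }");
            sb.AppendLine();
        }

        sb.AppendLine("}");
        File.WriteAllText("GeneratedTests.cs", sb.ToString());
    }
}

Add the generated GeneratedTests.cs to a test project that references NUnit and the old test assembly, and NUnit will pick up every wrapper via the [Test] attribute.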
You're about to live through the idiom "an ounce of prevention is worth a pound of cure". It's especially true in programming.
You make no mention of NUnit (which I think was bought by Microsoft for 2008, but don't hold me to that). Is there a particular reason you didn't just use NUnit in the first place?