Exception thrown in OneTimeTearDown; tests not marked as failure - C#

Title says it all ...
I'm working in C#, in Visual Studio, with NUnit, and with ReSharper as my test runner.
I have a unit test with a [OneTimeTearDown] method.
That method is throwing an exception at the moment.
The test appears to be marked as 'inconclusive', rather than failed.
This seems a bit rubbish :(
Is there a way to fix this, or is it just how the framework works?

It's a combination of how the framework works and how the runner is reporting the result. At the point in time when OneTimeTearDown fails, all the tests have already been reported to the runner as succeeding, through test completion events.
So, those tests did run successfully but something went wrong in cleaning up the fixture. That error is reported against the fixture. Some runners may show this information and some may not. If you are running under the Test Explorer in Visual Studio, you'll notice that there is no info shown for fixtures, only for individual tests. So the runner, if it wants to report the failure to you, has no place to do it except possibly as text in the output window.
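To make this concrete, a minimal fixture along these lines (the names are purely illustrative) will show its test as passed in most runners, with the tear-down failure attached only to the fixture:

    using System;
    using NUnit.Framework;

    [TestFixture]
    public class TearDownFailureFixture
    {
        [Test]
        public void SomeTest()
        {
            Assert.Pass(); // reported to the runner as soon as the test completes
        }

        [OneTimeTearDown]
        public void CleanUp()
        {
            // This runs after every test result has already been reported,
            // so the failure can only be attached to the fixture, not to the tests.
            throw new InvalidOperationException("Cleanup failed");
        }
    }
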
As an experiment, you might try running your tests under nunit3-console to see how it handles the result. You could also try using the NUnit 3 VS adapter without ReSharper to see how that comes out. Then pick the approach you like best and/or file an issue with the developers of the particular runner.
PS: If you run under nunit3-console, you can examine the XML result file to see what info is reported to any runner.
Sorry that this is not a more immediately useful answer!

Related

How to get the NUnit runner's generated nunit-report.xml file in teardown

I currently run a suite of tests, and in the TestFixtureTearDown I look for the "nunit-report.xml" and generate a webpage with the results. The problem is that this file will not be generated by NUnit until after it has run through the teardown. Is there any way that I can terminate NUnit in code, using C#, in the teardown?
NUnit generates its XML output after the tests are finished. That makes sense, since the info to generate the report is not available until after the tests have all run.
What you may not be realizing is that your TestFixtureTearDown is part of your test. For example, even if everything was OK up to that point, the TestFixtureTearDown might throw an exception and give you an error result. And of course you may have more than one fixture still to run at the time TestFixtureTearDown completes for a particular fixture.
The above is true in all versions of NUnit. What you may do about it varies between NUnit V2 and NUnit 3. Since you use the term TestFixtureTearDown, I'll start by assuming an older version of NUnit...
In NUnit V2, the XML output is generated by the console runner itself. It's not available until after the runner exits. If you want to write code to generate a report from it, you have to write a separate program, which you run after the test run completes.
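As a rough sketch of such a stand-alone program (the file names are assumptions; note that the test-case attribute names differ between NUnit versions, as the comments indicate):

    using System;
    using System.IO;
    using System.Linq;
    using System.Xml.Linq;

    class NUnitReportGenerator
    {
        // Run this after the test run finishes.
        // NUnit V2 test-case elements carry a success="True|False" attribute,
        // while NUnit 3 carries result="Passed|Failed".
        static void Main(string[] args)
        {
            var resultFile = args.Length > 0 ? args[0] : "nunit-report.xml";
            var doc = XDocument.Load(resultFile);

            var rows = doc.Descendants("test-case")
                .Select(tc => string.Format("<tr><td>{0}</td><td>{1}</td></tr>",
                    (string)tc.Attribute("name"),
                    (string)(tc.Attribute("result") ?? tc.Attribute("success"))));

            File.WriteAllText("nunit-report.html",
                "<html><body><table>" + string.Join("", rows) + "</table></body></html>");
        }
    }
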
In NUnit 3, the output is created by the engine, but still at the request of the console runner. The difference is that NUnit 3 supports extensions to the engine. You can write a report writer extension and invoke it from the command line. With the proper extension, written by you, you might for example invoke NUnit with nunit3-console mytest.dll --result:myreport.html;format=myhtml
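A very rough outline of such an extension, assuming the IResultWriter interface and the [Extension]/[ExtensionProperty] attributes from the NUnit engine API (check the engine extensibility docs for the exact contract before relying on this):

    using System.IO;
    using System.Xml;
    using NUnit.Engine.Extensibility;

    [Extension]
    [ExtensionProperty("Format", "myhtml")] // matches the ;format=myhtml part of --result
    public class MyHtmlResultWriter : IResultWriter
    {
        public void CheckWritability(string outputPath)
        {
            // Called up front so a bad output path fails before the tests run.
            using (new StreamWriter(outputPath, false)) { }
        }

        public void WriteResultFile(XmlNode resultNode, string outputPath)
        {
            using (var writer = new StreamWriter(outputPath, false))
                WriteResultFile(resultNode, writer);
        }

        public void WriteResultFile(XmlNode resultNode, TextWriter writer)
        {
            // Walk the result XML and emit whatever HTML you need.
            writer.WriteLine("<html><body><pre>" + resultNode.OuterXml + "</pre></body></html>");
        }
    }
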
NUnit 3 also gives you the ability to create a new output format based on an XSLT transform, if that sort of thing appeals to you. :-)

SpecFlow with NUnit: SetUp method runs twice

I installed SpecFlow and SpecRun recently on top of NUnit. I had a bit of trouble with references and NuGet packages, but finally my tests run again. But now, whenever I run a test (a SpecFlow feature), my TestBase [SetUp] method runs again once it reaches the end, resulting in the browser window opening again. The test runs to the end on the second attempt. Has anyone had a similar problem?
I was checking solutions that point to PDB files, as I see this popping up in the Debug window, but none seemed to work. Also, in the Immediate Window I see this: Step into: Stepping over non-user code
I'm running the tests under a recent version of SpecFlow (v2.1.0) and NUnit3.21, against WebDriver v2.53.
For future reference: NUnit and SpecFlow hooks are mutually exclusive. Make sure you run your tests with the attributes specific to the provider you want to run the tests with.
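In practice that means keeping the per-scenario work in SpecFlow's own hooks rather than in an NUnit [SetUp] on a base class. A minimal sketch, where the class name and driver choice are just illustrative:

    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;
    using TechTalk.SpecFlow;

    [Binding]
    public class WebDriverHooks
    {
        public static IWebDriver Driver;

        [BeforeScenario] // SpecFlow hook; don't also start the browser in an NUnit [SetUp]
        public static void StartBrowser()
        {
            Driver = new FirefoxDriver();
        }

        [AfterScenario]
        public static void StopBrowser()
        {
            if (Driver != null)
                Driver.Quit();
        }
    }
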

Webdriver, use "AfterScenario" to rerun failed test before report is generated

Is it possible to rerun the current test that has just failed, during the "AfterScenario" phase? As far as I know, a Console.WriteLine during the AfterScenario appears in the report, so the report is generated after the "AfterScenario".
The ScenarioContext class can detect if the scenario has failed. Maybe it can also launch a scenario? Any other means of doing this is also acceptable.
Edit:
To go back to the "BeforeScenario" would probably also work.
I want to say that it isn't possible, but like many things in IT you might be able to force it to work if you apply enough pressure.
So instead I'll say SpecFlow certainly isn't designed to support this. Consider all the pieces that are normally involved in running tests:
The host, such as Visual Studio, a build process (such as a TeamCity agent) or just a command line. These all kick off the tests by invoking:
A test runner. It could be the VS or ReSharper test runner, the TeamCity NUnit runner or even just nunit-console.exe (or the MSTest equivalent). This loads up your test DLL and scans it for the NUnit/MSTest [Test] / [SetUp] / [TearDown] attributes.
The test DLL contains the code that the SpecFlow plugin generated, which maps the NUnit attributes to SpecFlow attributes.
I think you are only considering that last layer and not the ones on top. So even if you somehow manage to convince SpecFlow and NUnit/MSTest to run those tests again, I think you will just end up with errors all the way through the process as Visual Studio, ReSharper, TeamCity agents or the command line fail to parse the same tests running twice.
Alternatively, why not look at optimised test runners to solve your problem?
The TeamCity NUnit runner can run the tests that failed last time first, so you know very quickly whether you are going to fail again.
NCrunch allows you to choose engine modes, so that you might only run tests that failed previously, or tests that are affected by your code change, or both (via a custom rule).
If you are using SpecFlow+ Runner (AKA SpecRun), you can set a retry on failure for your SpecFlow tests and then the runner will rerun any failed tests. SpecFlow+ Runner is a commercial product, although there is an evaluation version available.
I 100% agree with everything that Alski said, in that I feel you are going to be swimming against the tide trying to bend SpecFlow to your will in this way.
However, I also want to offer an alternative solution that I would try if I were faced with this problem. I would probably check for test failures in the AfterScenario and then write out a file with the name of the test that failed. Then, after the test run, if the file existed I would use it to rerun the individual failed tests, roughly as sketched below.
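A rough sketch of that idea, using the ScenarioContext mentioned in the question (the file name and how you feed it back into a rerun are up to you):

    using System.IO;
    using TechTalk.SpecFlow;

    [Binding]
    public class FailureRecordingHooks
    {
        [AfterScenario]
        public void RecordFailure()
        {
            // If the scenario failed, append its title to a file that a later
            // build step can use to rerun just the failed scenarios.
            if (ScenarioContext.Current.TestError != null)
            {
                File.AppendAllText("failed-scenarios.txt",
                    ScenarioContext.Current.ScenarioInfo.Title + "\n");
            }
        }
    }
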
But TeamCity would still be my first choice for doing this.

Tests succeed when run from Test View but fail when run from test list editor or command line

I have a test suite, comprised of both unit tests and integration tests, in a project using C# on .NET 4.0 with Visual Studio 2010. The test suite uses MSTest. When I run all tests in solution (either by hitting the button in the testing toolbar or using the Ctrl-R A shortcut chord) all of the tests, integration and unit, pass successfully.
When I either attempt to run the same tests from the command line with mstest (explicitly using the only .testsettings file present) or attempt to run them from the Test List Editor or using the .vsmdi file, the integration tests fail.
The integration tests test the UI and so have dependencies on deployment items and such, whereas the unit tests do not. However, I cannot seem to pin down what is actually different between these two methods of running the tests.
When I inspect the appropriate Out directories from the test run, not all of the files are present.
What would cause some of the files that deploy correctly in one situation from Visual Studio to not deploy correctly in another?
The static content started being copied shortly after I wrote the comments above. The other major issue I ran into was that the integration test project referenced libraries that were dependencies of the system-under-test (with copy-local set to true) in order to ensure that the DLLs would be present when they were needed. For some reason, these stubbornly refused to copy when the tests were run through Test List or mstest.
What I eventually did to work around it was include [DeploymentItem] attributes for the DLLs that I needed. This got things working no matter how the tests were run. What I am still unclear on, and what might have revealed the underlying cause or a better solution, is how Test View/mstest differ from the regular test runner (assuming that the correct .testsettings file was passed to mstest). I'm putting these notes/workarounds in an answer, but I'll leave the question open in case anyone can address the underlying cause of how the different test execution paths differ.
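For reference, the workaround looked roughly like this (the assembly names here are placeholders for the actual dependencies):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    // Force the dependency DLLs into the test run's Out directory,
    // regardless of which runner deploys the tests.
    [DeploymentItem("SomeDependency.dll")]
    [DeploymentItem(@"Libs\AnotherDependency.dll")]
    public class UiIntegrationTests
    {
        [TestMethod]
        public void Ui_loads_correctly()
        {
            // ... test body ...
        }
    }
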

How to debug unit test failures in Visual C# 2008?

This is nearly my first experience with unit testing.
I added a unit test to my solution, and selected Test->Run->All Tests in Solution. My test failed due to an exception which got thrown in the tested code.
Normally, I would then go to the stack trace tool window, click my way through it, looking at the values of locals in every stack frame, and figure out what went wrong. But when code fails within a unit test, I don't get the normal "yellow balloon" exception notification, and I'm not able to explore the stack trace in detail. All I get is a "TestMethod1 [Results]" tab, which displays only the exception message and a plain-text stack trace. So, no access to the values of locals, no access to any debug output I may have printed to the console...
How am I supposed to debug it then?
You need to select "Test->Debug->All Tests in Solution"; then the debugger works as normal.
All the normal debug windows are available by going to "Debug->Windows".
You can install TestDriven.NET, which is a Visual Studio add-in that allows you to do just that - debug your tests. There is a free community version.
You can put a breakpoint in your test method, like this:

    <TestMethod()> _
    Public Sub Test() ' <--- Put the breakpoint here

and then choose to debug the unit test; you can then step through the code.
