SpecFlow with NUnit: SetUp method runs twice - c#

I recently installed SpecFlow and SpecRun on top of NUnit. I had a bit of trouble with references and NuGet packages, but finally my tests run again. However, now whenever I run a test (a SpecFlow feature), my TestBase [SetUp] method runs again once it reaches the end, which opens the browser window a second time. The test then runs to completion on the second attempt. Has anyone had a similar problem?
I checked solutions that point to PDB files, as I see those popping up in the Debug window, but none seemed to work. Also, in the Immediate Window I see this: Step into: Stepping over non-user code
I'm running the tests under a recent version of SpecFlow (v2.1.0) and NUnit 3.2.1 against WebDriver v2.53.

For future reference: NUnit and SpecFlow hooks are mutually exclusive. Make sure you run your tests with the attributes specific to the provider you want to run the tests with.
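One way to read this advice: keep the browser setup in SpecFlow's own hooks rather than in an NUnit [SetUp] on a shared base class, so the SpecFlow-generated test code doesn't trigger it a second time. A minimal sketch, assuming Selenium WebDriver and illustrative class/member names:

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    public static IWebDriver Driver { get; private set; }

    [BeforeScenario]   // SpecFlow hook instead of NUnit's [SetUp]
    public static void StartBrowser()
    {
        Driver = new FirefoxDriver();
    }

    [AfterScenario]    // SpecFlow hook instead of NUnit's [TearDown]
    public static void StopBrowser()
    {
        if (Driver != null)
        {
            Driver.Quit();
        }
    }
}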

Related

Exception thrown in OneTimeTearDown; Tests not marked as failure

Title says it all ...
I'm working in C#, in Visual Studio, with NUnit, and with ReSharper as my TestRunner
I have a unit test with a [OneTimeTearDown] method.
That method is throwing an exception at the moment.
The test appears to be marked as 'inconclusive', rather than failed.
This seems a bit rubbish :(
Is there a way to fix this, or is it just how the framework works?
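For reference, a minimal version of what I'm describing (names are illustrative): the test itself passes, but the one-time teardown throws afterwards.

using System;
using NUnit.Framework;

[TestFixture]
public class TeardownFailureFixture
{
    [Test]
    public void PassingTest()
    {
        Assert.Pass();
    }

    [OneTimeTearDown]
    public void CleanUp()
    {
        // Simulates the failing cleanup; the test above has already been
        // reported as passing by the time this runs.
        throw new InvalidOperationException("Cleanup failed");
    }
}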
It's a combination of how the framework works and how the runner is reporting the result. At the point in time when OneTimeTearDown fails, all the tests have already been reported to the runner as succeeding, through test completion events.
So, those tests did run successfully but something went wrong in cleaning up the fixture. That error is reported against the fixture. Some runners may show this information and some may not. If you are running under the Test Explorer in Visual Studio, you'll notice that there is no info shown for fixtures, only for individual tests. So the runner, if it wants to report the failure to you, has no place to do it except possibly as text in the output window.
As an experiment, you might try running your tests under nunit3-console to see how it handles the result. You could also try using the NUnit 3 VS adapter without ReSharper to see how that comes out. Then pick the approach you like best and/or file an issue with the developers of the particular runner.
PS: If you run under nunit3-console, you can examine the XML result file to see what info is reported to any runner.
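For example (assembly name illustrative; --result just makes the result file location explicit):

nunit3-console.exe MyTests.dll --result=TestResult.xml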
Sorry that this is not a more immediately useful answer!

Xamarin - Cross Platform Unit Testing #2

This is a common question, it's really quite astounding to me how difficult such a simple task is turning out to be ...
I have a Visual Studio (formerly Xamarin) C# cross-platform library. It has some code in the shared "abstract" library, and a few machine-specific implementation classes for Android, iOS, and UWP libraries. It also has a few automated (non-UI) unit tests that use the Microsoft.VisualStudio.TestTools.UnitTesting package. This is a great start, when Visual Studio 2017 is feeling generous it runs my unit tests and life is good.
Now I want to run the unit tests on simulators of the actual devices, from a command line (so I can run them from Jenkins and capture the output). So, for example, I want to run my unit tests on a simulator of an iPad. From various web sites I get the feeling that I can't use the Microsoft package for this, so I need to switch to either Xunit or NUnit. I gave up on Xunit earlier today, here's how NUnit went.
Go into the Unit Test project and remove the NuGet packages for Microsoft.NET.Test.Sdk, MSTest.TestFramework, and MSTest.TestAdapter.
Add NUnit instead (version 3.7.0).
Go into each CS file, change to "using NUnit.Framework", change [TestMethod] to [Test], and [TestClass] to [TestFixture] (a sketch follows these steps).
Project:Manage NuGet Packages, add the NUnit3TestAdapter.
Test Explorer: Run All
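The attribute swap mentioned above looks roughly like this (test names are illustrative); before, with MSTest, the class used [TestClass]/[TestMethod] from Microsoft.VisualStudio.TestTools.UnitTesting:

using NUnit.Framework;

[TestFixture]              // was [TestClass]
public class CalculatorTests
{
    [Test]                 // was [TestMethod]
    public void TwoPlusTwoIsFour()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}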
At this point I get this weird error for the UWP machine-specific library:
The "XamlCTask" task could not be initialized with its input parameters
The "DebugType" parameter is not supported by the "XamlCTask" task. Verify the parameter exists on the task, and it is a settable public instance property.
That's a really strange message, and if I Rebuild the UWP library, it rebuilds without errors. I'm not sure why it's building the UWP library as part of the Unit Tests; the only thing listed under "Projects" is the abstract (non-machine-specific) library.
It's also strange that building the Unit Test library works; it's only when I do the Test Explorer: Run All that I get these errors.
OK fine, it's a known error.
Close VS2017, delete bin and obj folders, reopen, no better. Close VS2017, hack the solution's Xamarin.Forms.targets as listed. Same error. Realize the error is in the global Xamarin.Forms.targets, not the one in my solution. Hack it. Close & reopen solution, rebuild. No error this time!
Test Explorer: Run all. Nothing. Output:Build shows no failures, Output:Tests is completely empty. Close & reopen solution. Get new errors:
[Failure] Could not find file 'C:\Workspace\PopComm.iOS\obj\Debug\TemporaryGeneratedFile_5937a670-0e60-4077-877b-f7221da3dda1.cs'.
[Failure] Could not find file 'C:\Workspace\PopComm.iOS\obj\Debug\TemporaryGeneratedFile_E7A71F73-0F8D-4B9B-B56E-8E70B10BC5D3.cs'.
[Failure] Could not find file 'C:\Workspace\PopComm.iOS\obj\Debug\TemporaryGeneratedFile_036C0B5B-1481-4323-8D20-8F5ADCB23D92.cs'.
Test Explorer: Run all. This time get more interesting errors:
------ Discover test started ------
NUnit Adapter 3.7.0.0: Test discovery starting
Assembly contains no NUnit 3.0 tests:
C:\Workspace\PopComm.iOS\bin\iPhone\Debug\PopComm.dll
NUnit Adapter 3.7.0.0: Test discovery complete
========== Discover test finished: 0 found (0:00:00.0530024) ==========
Of course that assembly doesn't contain any unit tests! It's not my unit test project!
Sigh.
OK, create a new project of type "NUnit 3 Unit Test Project". Move my test classes into this new project and add a dependency to my non-machine-specific library.
Test Explorer: Run all - Tests! There are tests! And they even run successfully.
Close VS2017, open again, Test Explorer: Run all. No tests. Output:Tests is empty.
Close project, delete all bin and obj folders, Rebuild Solution, Test Explorer: Run All. No tests.
Push code to git, pull on the Mac.
Open solution in Visual Studio for the Mac.
Use the mostly-hidden View:Test feature to bring up the Unit Tests window, run the tests. Tests found and run. Yippee!
Now I want to run my unit tests on an iPad simulator, not inside VSMac.
Add a new project to my solution, type iOS:Tests:Unified API:Unit Test App.
Wizard asks what is the minimum iOS version I want to support. I figure I should match the iOS version that my machine-specific library targets, so I go looking in the VSMac options for the project. The iOS target version isn't shown anywhere. Use the default (10.3) for the Unit Test App.
Do I have to copy all of my unit test CS files into this new app now? Or can I somehow reference my cross-platform unit test project in this Unit Test App?
Pushing everything and pulling on the Windows machine, I try adding a reference to my unit test library but get an error:
The project 'NUnit.Tests' cannot be referenced. The referenced project is targeted to a different framework family (.NETFramework)
Hum. I wonder what framework the new NUnit.Tests.iOS application is targeting. Check its options - it doesn't say anywhere. Remove the reference from the Unit Test App back to the unit test files, and just copy the unit test source files over (non-optimal).
Run the application in the iPad simulator, it finds & runs the tests, but I don't know how to (1) run it from the command line, and (2) capture the unit test statuses in a file.
So after all this, I still don't know how to run my unit tests (1) from the command line, (2) on an iPad simulator, and (3) capturing the output in a text file.

Webdriver, use "AfterScenario" to rerun failed test before report is generated

Is it possible to rerun the current test that has just failed, during the "AfterScenario" phase? As far as I know, a Console.WriteLine during the AfterScenario appears in the report, so the report is generated after the "AfterScenario".
The ScenarioContext class can detect if the Scenario has failed. Maybe it can also launch a scenario? Any other means of doing this is also acceptable.
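The detection part looks roughly like this (a sketch assuming SpecFlow's static ScenarioContext.Current; it's the rerun I'm stuck on):

using System;
using TechTalk.SpecFlow;

[Binding]
public class FailureDetectionHooks
{
    [AfterScenario]
    public static void CheckForFailure()
    {
        // TestError is non-null when a step in the scenario failed.
        if (ScenarioContext.Current.TestError != null)
        {
            Console.WriteLine("Failed scenario: " + ScenarioContext.Current.ScenarioInfo.Title);
        }
    }
}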
Edit:
To go back to the "BeforeScenario" would probably also work.
I want to say that it isn't possible, but like many things in IT you might be able to force it to work if you apply enough pressure.
So instead I'll say SpecFlow certainly isn't designed to support this. Consider all the pieces that are normally involved in running tests:
The host, such as Visual Studio, a build process (such as a TeamCity agent) or just a command line. These all kick off the tests by invoking:
A test runner. It could be the VS or ReSharper test runner, the TeamCity NUnit runner or even just nunit-console.exe (or the MSTest equivalent). This loads up your test DLL and scans it for the NUnit/MSTest [Test] / [Setup] / [TearDown] attributes.
The test DLL contains the code that the SpecFlow plugin generated, which maps the NUnit attributes to SpecFlow attributes.
I think you are only considering that last layer and not the ones on top. So even if you somehow manage to convince SpecFlow and NUnit/MSTest to run those tests again, I think you will just end up with errors all the way through the process as Visual Studio, ReSharper, TeamCity agents or the command line fail to handle the same tests running twice.
Alternatively, why not look at optimised test runners to solve your problem?
The TeamCity NUnit runner can run the tests that failed last time first, so you know very quickly if you are going to fail again.
NCrunch allows you to choose engine modes, so that you might only run tests that failed previously, or tests that are affected by your code change, or both (via a custom rule).
If you are using SpecFlow+ Runner (AKA SpecRun) you can set a retry on failure for your SpecFlow tests, and the runner will then rerun any failed tests. SpecFlow+ Runner is a commercial product, although there is an evaluation version available.
I 100% agree with everything that Alski said, in that I feel you are going to be swimming against the tide trying to bend SpecFlow to your will in this way.
However, I also want to offer an alternative solution that I would try if I was faced with this problem. I would probably check for test failures in the AfterScenario and then write out a file with the names of the tests that failed. Then, after the test run, if the file existed, I would use it to rerun the individual failed tests.
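A rough sketch of that idea, assuming SpecFlow's static ScenarioContext.Current (the file name is illustrative):

using System;
using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class FailedScenarioLog
{
    private const string FailedScenariosFile = "failed-scenarios.txt";

    [AfterScenario]
    public static void RecordFailedScenario()
    {
        if (ScenarioContext.Current.TestError != null)
        {
            // Append the failed scenario's title so a post-run script can
            // re-invoke the test runner for just these scenarios.
            File.AppendAllText(FailedScenariosFile,
                ScenarioContext.Current.ScenarioInfo.Title + Environment.NewLine);
        }
    }
}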
But TeamCity would still be my first choice for doing this.

Tests succeed when run from Test View but fail when run from test list editor or command line

I have a test suite, comprised of both unit tests and integration tests, in a project using C# on .NET 4.0 with Visual Studio 2010. The test suite uses MSTest. When I run all tests in solution (either by hitting the button in the testing toolbar or using the Ctrl-R A shortcut chord) all of the tests, integration and unit, pass successfully.
When I either attempt to run the same tests from the command line with mstest (explicitly using the only .testsettings file present) or attempt to run them from the Test List Editor or using the .vsmdi file the integration tests fail.
The integration tests test the UI and so have dependencies on deployment items and such, whereas the unit tests do not. However, I cannot seem to pin down what is actually different between these two methods of running the tests.
When I inspect the appropriate Out directories from the test run, not all of the files are present.
What would cause some of the files that deploy correctly in one situation from Visual Studio to not deploy correctly in another?
The static content started being copied shortly after I wrote the comments above. The other major issue I ran into was that the integration test project referenced libraries that were dependencies of the system-under-test (with copy-local set to true) in order to ensure that the DLLs would be present when they were needed. For some reason, these stubbornly refused to copy when the tests were run through Test List or mstest.
What I eventually did to work around it was include [DeploymentItem] attributes for the DLLs that I needed. This got things working no matter how the tests were run. What I am still unclear on, and what might have revealed the underlying cause or provided a better solution, is how Test View/mstest differ from the regular test runner (assuming that the correct .testsettings file was passed to mstest). I'm putting these notes/workarounds in an answer, but I'll leave the question open in case anyone can address the underlying cause for how the different test execution paths differ.
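For reference, the [DeploymentItem] workaround looked roughly like this (assembly and test names are illustrative):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
// Force MSTest to copy these dependencies into the test run's Out folder,
// however the run is started.
[DeploymentItem("SystemUnderTest.Dependency.dll")]
[DeploymentItem("SystemUnderTest.OtherDependency.dll")]
public class UiIntegrationTests
{
    [TestMethod]
    public void MainWindowLoads()
    {
        // UI integration test body omitted.
    }
}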

Prevent MSTest from copying / deploying every dll

When running MSTest from Visual Studio - the unit test execution time is relatively quick.
When running MSTest from the command line, with the /testsettings flag, the execution takes forever, and that is because it spends 95% of its startup time copying the DLLs to its Out folder. Is there a way to prevent this?
The default Local.testsettings in the project has no modifications to it (which also means it is empty). However, if I try to use that same file from the command line, MSTest complains about missing DLLs that the unit tests reference.
Have you tried disabling deployment in the test settings? When it is disabled, the tests should be run in-place rather than on copied assemblies. (See http://msdn.microsoft.com/en-us/library/ms182475.aspx for details.)
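If it helps, the switch in question is the Deployment element in the .testsettings file; something along these lines (name/id attributes trimmed for brevity):

<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- Run tests in place instead of copying assemblies to an Out folder -->
  <Deployment enabled="false" />
</TestSettings>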
Try MSTest.exe /noisolation (see http://msdn.microsoft.com/en-US/library/ms182489.aspx).
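For example, combined with a test container (file name is illustrative):

MSTest.exe /noisolation /testcontainer:MyUnitTests.dll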
