MSTest v2 Ordered Tests - C#

I am using Visual Studio 2017 Enterprise and MSTest V2. My solution has multiple unit test projects. In one project, I have unit tests that are testing the loading of resources from the installation directory. Most verify that the resources are loaded correctly, but some delete a resource to confirm that this case is handled correctly as well.
The issue that I am having is that the tests run in parallel. Therefore, the tests that remove the resources do this at the same time the tests that are loading the resources are running, and I get failed tests.
I realize I can resolve this by updating my code to send the directory to search, or by running one set of tests and then the next, but I would prefer being able to run all tests at once. It sounds like MSTest v2 is supposed to run sequentially unless otherwise directed to run in parallel, but on my system, this is demonstrably false. It also appears that Ordered Test does not work with v2. Is there a way to get MSTest V2 to run sequentially?

MSTest v2 will not support ordered tests (see this issue).
You may have set the parallelization scope in a settings file or at the assembly level: https://www.meziantou.net/mstest-v2-execute-tests-in-parallel.htm
If you remove that setting, the tests will run sequentially.
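For reference, the assembly-level setting that turns parallel execution on in MSTest v2 looks roughly like this (attribute names from the MSTest.TestFramework package; the namespace, class and test names below are placeholders). Removing it, or marking specific classes with [DoNotParallelize], keeps those tests sequential:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Assembly-level opt-in to parallel execution; delete this line (or change
// Workers/Scope) to change how tests are scheduled.
[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.MethodLevel)]

namespace ResourceTests
{
    // Placeholder class: [DoNotParallelize] keeps these tests out of the
    // parallel run even when the rest of the assembly runs in parallel.
    [TestClass]
    [DoNotParallelize]
    public class ResourceDeletionTests
    {
        [TestMethod]
        public void MissingResource_IsHandledGracefully()
        {
            // ... arrange, act, assert ...
        }
    }
}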
Alternatively, you could create a flag that the dependent tests update, and check its status before cleaning up the resource: perhaps a dictionary of test names and statuses, so this test either executes once the other is done or waits for it to complete. You would have to implement that coordination logic yourself.

To make the tests run sequentially, set MaxCpuCount to 1 in your .runsettings file; for more information, see Configure unit tests by using a .runsettings file.
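A minimal .runsettings along those lines might look like the following (element names as documented; the file name and location are up to you, and the file must be selected as the active settings file):

<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <!-- 1 = run test containers one at a time instead of in parallel -->
    <MaxCpuCount>1</MaxCpuCount>
  </RunConfiguration>
</RunSettings>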

Related

Functional testing with TFS

I'm looking to perform functional tests with a special app we build.
That app operates all sorts of embedded functionality, and I need to be able to build test cases that perform these actions as a scenario.
We thought we could simply use the TFS API to get information and just write back test runs and their results, but it proved to be a difficult task.
So we researched the "associated automation" feature inside test cases, but it seems that I need a special framework for this; I was told only unit testing frameworks such as xUnit, NUnit and MSTest can be integrated.
I need functional testing: scenarios that are more complicated than a unit test.
Do you have any ideas on how I can simply run my own tests and update TFS with the runs that I created?
If your test tool can generate a JUnit, XUnit or TRX compatible file that contains the test results, then the data can be ingested using the "Publish Test Results" task in your build and/or release pipeline.
If you want a wrapper around any executable, then the "Generic Tests" feature of MSTest may also be an option. These configure how to run your executable and then point to a result file for reporting purposes. A sample result file from the docs is shown below:
<?xml version="1.0" encoding="utf-8" ?>
<SummaryResult>
  <TestName>ParentTest</TestName>
  <TestResult>Passed</TestResult>
  <InnerTests>
    <InnerTest>
      <TestName>InnerTest1</TestName>
      <TestResult>Passed</TestResult>
      <ErrorMessage>Everything is fine.</ErrorMessage>
      <DetailedResultsFile>D:\Documents and Settings\Results.txt</DetailedResultsFile>
    </InnerTest>
    <InnerTest>
      <TestName>InnerTest2</TestName>
      <TestResult>Failed</TestResult>
      <ErrorMessage>Something went wrong.</ErrorMessage>
      <DetailedResultsFile>D:\Documents and Settings\Results.txt</DetailedResultsFile>
    </InnerTest>
  </InnerTests>
</SummaryResult>
Alternatively, test results can be created directly through the REST API: first create a test run, then create test results for that run.
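As a rough sketch of that REST flow (the collection URL, project name, JSON payloads and api-version below are assumptions; check the REST documentation for your TFS/Azure DevOps version), creating a run and attaching a result might look like this:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class TestRunPublisher
{
    static async Task Main()
    {
        // Hypothetical server, project and personal access token.
        var baseUrl = "https://tfs.example.com/DefaultCollection/MyProject";
        var pat = Environment.GetEnvironmentVariable("TFS_PAT");

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            // 1. Create a test run.
            var runBody = "{ \"name\": \"Scenario run\", \"automated\": true }";
            var runResponse = await client.PostAsync(
                baseUrl + "/_apis/test/runs?api-version=5.0",
                new StringContent(runBody, Encoding.UTF8, "application/json"));
            Console.WriteLine(await runResponse.Content.ReadAsStringAsync()); // contains the new run id

            // 2. Add results to that run (replace 123 with the id returned above).
            var resultsBody = "[{ \"testCaseTitle\": \"Scenario 1\", \"outcome\": \"Passed\" }]";
            await client.PostAsync(
                baseUrl + "/_apis/test/runs/123/results?api-version=5.0",
                new StringContent(resultsBody, Encoding.UTF8, "application/json"));
        }
    }
}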

How to get the NUnit runner's generated nunit-report.xml file in teardown

I currently run a suite of tests, and in the TestFixtureTearDown I look for the "nunit-report.xml" and generate a webpage with the results. The problem is that this file will not be generated by NUnit until after it has run through the teardown. Is there any way that I can terminate NUnit in code, using C#, in the teardown?
NUnit generates its XML output after the tests are finished. That makes sense, since the information needed to generate the report is not available until all the tests have run.
What you may not be realizing is that your TestFixtureTearDown is part of your test. For example, even if everything was OK up to that point, the TestFixtureTearDown might throw an exception and give you an error result. And of course you may have more than one fixture that has still to run at the time TestFixtureTearDown completes for a particular fixture.
The above is true in all versions of NUnit. What you may do about it varies between NUnit V2 and NUnit 3. Since you use the term TestFixtureTearDown, I'll start by assuming an older version of NUnit...
In NUnit V2, the XML output is generated by the console runner itself. It's not available until after the runner exits. If you want to write code to generate a report from it, you have to write a separate program, which you run after the test completes.
In NUnit 3, the output is created by the engine but still on request of the console runner. The difference is that NUnit 3 supports extensions to the engine. You can write code as a report writer extension and invoke it from the command-line. With the proper extension, written by you, you might for example invoke NUnit with nunit3-console mytest.dll --result:myreport.html;format:myhtml
NUnit 3 also gives you the ability to create a new output format based on an XSLT transform, if that sort of thing appeals to you. :-)
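As a rough illustration of such an extension (interface and attribute names from the NUnit engine extensibility API; treat the exact signatures and the "myhtml" format name as assumptions to verify against the engine version you target):

using System.IO;
using System.Xml;
using NUnit.Engine.Extensibility;

[Extension]
[ExtensionProperty("Format", "myhtml")] // registers the format name used on the --result option
public class HtmlReportWriter : IResultWriter
{
    public void CheckWritability(string outputPath)
    {
        // Called before the run starts; throw here if the report can't be written.
        using (File.Create(outputPath)) { }
    }

    public void WriteResultFile(XmlNode resultNode, string outputPath)
    {
        using (var writer = new StreamWriter(outputPath))
            WriteResultFile(resultNode, writer);
    }

    public void WriteResultFile(XmlNode resultNode, TextWriter writer)
    {
        // Walk the engine's result XML and emit whatever report you need.
        writer.WriteLine("<html><body>");
        foreach (XmlNode testCase in resultNode.SelectNodes("//test-case"))
            writer.WriteLine("<p>" + testCase.Attributes["fullname"].Value + ": "
                             + testCase.Attributes["result"].Value + "</p>");
        writer.WriteLine("</body></html>");
    }
}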

WebDriver: use "AfterScenario" to rerun a failed test before the report is generated

Is it possible to rerun the current test that has just failed during the "AfterScenario" phase? As far as I know, a Console.WriteLine during the AfterScenario appears in the report, so the report is generated after the "AfterScenario".
The ScenarioContext class can detect whether the scenario has failed. Maybe it can also launch a scenario? Any other means of doing this is also acceptable.
Edit:
Going back to the "BeforeScenario" would probably also work.
I want to say that it isn't possible, but like many things in IT, you might be able to force it to work if you apply enough pressure.
So instead I'll say SpecFlow certainly isn't designed to support this. Consider all the pieces that are normally involved in running tests:
The host, such as Visual Studio, a build process (such as a TeamCity agent) or just a command line. These all kick off the tests by invoking:
A test runner. It could be the VS or ReSharper test runner, the TeamCity NUnit runner or even just nunit-console.exe (or the MSTest equivalent). This loads up your test DLLs and scans them for the NUnit/MSTest [Test] \ [Setup] \ [TearDown] attributes.
The test dll contains the code that the SpecFlow plugin generated and maps the nUnit attributes to SpecFlow Attributes.
I think you are only considering that last layer and not the ones on top. So even if you somehow manage to convince SpecFlow and nUnit/msTest to run those tests again, I think you will just end up with errors all the way through the process as VisualStudio, Resharper, TeamCity Agents or CmdLine fail to parse the same tests running twice.
Alternatively, why not look at optimised test runners to solve your problem?
The TeamCity NUnit runner can run tests that failed last time first, so you know very quickly whether you are going to fail again.
NCrunch allows you to choose engine modes, so that you might only run tests that failed previously, or tests that are affected by your code change, or both (via a custom rule).
If you are using SpecFlow+ Runner (AKA SpecRun) you can set a retry on failure for your SpecFlow tests and the runner will rerun any failed tests. SpecFlow+ Runner is a commercial product, although there is an evaluation version available.
I 100% agree with everything that Alski said, in that I feel you are going to be swimming against the tide trying to bend SpecFlow to your will in this way.
However I also want to offer an alternative solution that I would try if I was faced with this problem. I would probably check for test failures in the AfterScenario and then write out a file with the name of the test that failed. Then after the test run, if the file existed then I would use it to rerun the individual failed tests again.
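A minimal sketch of that idea, assuming SpecFlow's hook and context-injection API (the output file name here is arbitrary), could look like:

using System;
using System.IO;
using TechTalk.SpecFlow;

[Binding]
public class FailureTracker
{
    private readonly ScenarioContext _scenarioContext;

    public FailureTracker(ScenarioContext scenarioContext)
    {
        _scenarioContext = scenarioContext;
    }

    [AfterScenario]
    public void RecordFailure()
    {
        // TestError is non-null when a step in the scenario threw.
        if (_scenarioContext.TestError != null)
        {
            File.AppendAllText("failed-scenarios.txt",
                _scenarioContext.ScenarioInfo.Title + Environment.NewLine);
        }
    }
}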
But TeamCity would still be my first choice for doing this.

Tests succeed when run from Test View but fail when run from test list editor or command line

I have a test suite, comprised of both unit tests and integration tests, in a project using C# on .NET 4.0 with Visual Studio 2010. The test suite uses MSTest. When I run all tests in solution (either by hitting the button in the testing toolbar or using the Ctrl-R A shortcut chord) all of the tests, integration and unit, pass successfully.
When I either attempt to run the same tests from the command line with mstest (explicitly using the only .testsettings file present), or attempt to run them from the Test List Editor or using the .vsmdi file, the integration tests fail.
The integration tests test the UI and so have dependencies on deployment items and such, whereas the unit tests do not. However, I cannot seem to pin down what is actually different between these two methods of running the tests.
When I inspect the appropriate Out directories from the test run, not all of the files are present.
What would cause some of the files that deploy correctly in one situation from Visual Studio to not deploy correctly in another?
The static content started being copied shortly after I wrote the comments above. The other major issue I ran into was that the integration test project referenced libraries that were dependencies of the system-under-test (with copy-local set to true) in order to ensure that the DLLs would be present when they were needed. For some reason, these stubbornly refused to copy when the tests were run through Test List or mstest.
What I eventually did to work around it was include [DeploymentItem] attributes for the DLLs that I needed. This got things working no matter how the tests were run. What I am still unclear on, and what might have pointed to the underlying cause or a better solution, is how Test View/mstest differ from the regular test runner (assuming that the correct .testsettings file was passed to mstest). I'm putting these notes/workarounds in an answer, but I'll leave the question open in case anyone can address the underlying cause for how the different test execution paths differ.
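The workaround, in sketch form (attribute names from MSTest; the DLL and folder names below are placeholders for the dependencies that were failing to copy):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
[DeploymentItem("SystemUnderTest.Dependency.dll")] // single file copied to the Out directory
[DeploymentItem(@"Content\Static", "Content")]     // folder, with a target subdirectory
public class UiIntegrationTests
{
    [TestMethod]
    public void DeployedDependenciesArePresent()
    {
        Assert.IsTrue(System.IO.File.Exists("SystemUnderTest.Dependency.dll"));
    }
}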

Prevent MSTest from copying / deploying every dll

When running MSTest from Visual Studio, the unit test execution time is relatively quick.
When running MSTest from the command line with the /testsettings flag, the execution takes forever, and that is because it spends 95% of its startup time copying the DLLs to its Out folder. Is there a way to prevent this?
The default Local.testsettings in the project has no modifications to it (which also means it is empty). However, if I try to use that same file from the command line, MSTest complains about missing DLLs that the unit tests reference.
Have you tried disabling deployment in the test settings? When deployment is disabled, the tests are run in place rather than on copied assemblies. (See http://msdn.microsoft.com/en-us/library/ms182475.aspx for details.)
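For reference, a .testsettings with deployment disabled would look roughly like this (element names as used by the VS 2010-era test settings schema; verify against your own Local.testsettings):

<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Local" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <!-- enabled="false" makes tests run in place instead of copying assemblies to an Out folder -->
  <Deployment enabled="false" />
  <Execution>
    <TestTypeSpecific />
  </Execution>
</TestSettings>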
Try MSTest.exe /noisolation (see http://msdn.microsoft.com/en-US/library/ms182489.aspx).
