NUnit Selenium Parallel Tests with Values - C#

I'm trying to run the same NUnit test method with different values in parallel. However, the second test seems to fail (I think it's trying to use the first instance of the browser).
This is the test:
namespace AutomationProject.Login_Test_Cases
{
    [TestFixture]
    [Parallelizable(ParallelScope.Children)]
    class Login_Test_Cases : BaseTest
    {
        [Test]
        public void LoginPar([Values("skynet", "skynet2")] string username)
        {
            lg.Log_In(username, "password");
        }
    }
}
This is the BaseTest class where the browser is set up:
namespace AutomationProject.BaseClasses
{
    public class BaseTest
    {
        public Log_In_Methods lg;
        public IWebDriver driver;

        [SetUp]
        public void StartBrowser()
        {
            System.Diagnostics.Trace.AutoFlush = true;
            ChromeOptions options = new ChromeOptions();
            options.AddAdditionalCapability("useAutomationExtension", false);
            driver = new ChromeDriver(/* path to chromedriver */);
            lg = new Log_In_Methods(driver);
            driver.Manage().Window.Maximize();
            driver.Url = "http://login-test.com";
        }
    }
}
I've also added [assembly: Parallelizable(ParallelScope.Children)] and [assembly: LevelOfParallelism(2)] to AssemblyInfo.cs.
The second test always seems to fail (its browser does not even navigate to the URL).
I can run different classes and tests in parallel with no issues.
Does anyone know if it's possible to run the same test method in parallel with different values?

Does anyone know if it's possible to run the same test method in parallel with different values?
This is absolutely possible. The issue here is that both tests run in parallel on a single instance of the BaseTest class, so there is only one lg field (and one driver field), which both tests are trying to create and use simultaneously.
Being able to run the two separate tests with two separate BaseTest objects is an open feature request, see here: https://github.com/nunit/nunit/issues/2574
In the meantime, if you move your [SetUp] logic into the test method itself and use local variables, what you're trying to do should work.
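For illustration, here is a minimal sketch of that workaround (assuming Log_In_Methods only needs the driver; everything stays local to the test body, so the parallel cases never share state):

[Test]
public void LoginPar([Values("skynet", "skynet2")] string username)
{
    // Each parallel test case creates and owns its own browser instance.
    ChromeOptions options = new ChromeOptions();
    options.AddAdditionalCapability("useAutomationExtension", false);

    using (IWebDriver driver = new ChromeDriver(/* path to chromedriver */))
    {
        driver.Manage().Window.Maximize();
        driver.Url = "http://login-test.com";

        var lg = new Log_In_Methods(driver);
        lg.Log_In(username, "password");

        driver.Quit();
    }
}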

Related

Best practice for unit test cases

I am using the xUnit.net test framework, and in each unit test there are certain steps I repeat in every case. I would like to know if there is a way to call this setup method once before my test cases start, and also call a teardown once after all test cases have been executed.
For example: in the scenario below I have two test cases, and in each one I create a local DB, populate it with data, run my test, and once it is done I call a method to delete the DB. I do this in every test case. Instead of creating the DB multiple times, I would like to create and populate it once, and then delete it once after all test cases have been executed. It is important for me to delete what I have created, as there are certain test cases which will fail if the database is not created when the tests are executed.
[Fact]
public void UnitCase1()
{
    CreateDb();
    UploadData();
    ... // My set of operations to test this case
    ... // Assert
    DeleteDb();
}

[Fact]
public void UnitCase2()
{
    CreateDb();
    UploadData();
    ... // My set of operations to test this case
    ... // Assert
    DeleteDb();
}
Edit after the answer from Eric (I tried it, but it's not working):
public class CosmosDataFixture : IDisposable
{
    public static readonly string CosmosEndpoint = "https://localhost:8081";
    public static readonly string EmulatorKey = "Mykey";
    public static readonly string DatabaseId = "Databasename";
    public static readonly string RecordingCollection = "collectionName";

    string Root = Directory.GetParent( Directory.GetCurrentDirectory() ).Parent.Parent.FullName;
    DocumentClient client = null;

    public void ReadAllData( DocumentClient client )
    {
        // reading document code
    }

    public void ReadConfigAsync()
    {
        client = new DocumentClient( new Uri( CosmosEndpoint ), EmulatorKey,
            new ConnectionPolicy
            {
                ConnectionMode = ConnectionMode.Direct,
                ConnectionProtocol = Protocol.Tcp
            } );
    }

    public void CreateDatabase()
    {
        // create db code
    }

    private void DeleteDatabase()
    {
        // delete db code
    }

    public CosmosDataFixture()
    {
        ReadConfigAsync();
        CreateDatabase();
        ReadAllData( client );
    }

    public void Dispose()
    {
        DeleteDatabase();
    }
}

public class CosmosDataTests : IClassFixture<CosmosDataFixture>
{
    CosmosDataFixture fixture;

    public CosmosDataTests( CosmosDataFixture fixture )
    {
        this.fixture = fixture;
    }

    [Fact]
    public async Task CheckDatabaseandCollectionCreation()
    {
        List<string> collectionName = new List<string>();
        var uri = UriFactory.CreateDatabaseUri( DatabaseId ); // can't get DatabaseId or client -- "does not exist in the current context"
        var collections = await client.ReadDocumentCollectionFeedAsync( uri );
        foreach( var collection in collections )
        {
            collectionName.Add( collection.Id );
        }
    }
}
That's what [SetUp] and [TearDown] are for in NUnit. They are run right before and right after each test case, respectively. In xUnit you would usually implement a default constructor and IDisposable.
For example:
public class TestClass : IDisposable
{
    public TestClass()
    {
        CreateDb();
        UploadData();
    }

    public void Dispose()
    {
        DeleteDb();
    }

    [Fact]
    public void UnitCase1()
    {
        ... // My set of operations to test this case
        ... // Assert
    }

    [Fact]
    public void UnitCase2()
    {
        ... // My set of operations to test this case
        ... // Assert
    }
}
As other people have pointed out, such tests are in mainstream parlance not unit tests, but rather integration tests. xUnit.net is a fine framework for those kinds of tests, though, so apart from the semantic distinction, it makes little technical difference.
Apart from setting up the database in the test class' constructor and tearing it down in Dispose, as outlined by Eric Schaefer, you can also use xUnit.net's BeforeAfterTestAttribute. You'll then override Before to set up the database, and override After to tear it down:
using System.Reflection;
using Xunit.Sdk;

public class UseDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        CreateDb();
        UploadData();
        base.Before(methodUnderTest);
    }

    public override void After(MethodInfo methodUnderTest)
    {
        base.After(methodUnderTest);
        DeleteDb();
    }
}
You can then annotate either each test method, or the entire test class with the attribute. I usually just annotate the class:
[UseDatabase]
public class DbTests
{
    // Tests go here...
}
Since tests that use a database interact with a shared resource (the database), they can't easily run in parallel. By default, xUnit.net runs tests in parallel, so you may want to disable that. You can do it by adding an xunit.runner.json file:
{
    "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
    "parallelizeTestCollections": false
}
Finally, at least if you're using SQL Server, connection pooling will prevent you from deleting the database. You can either turn off connection pooling for your tests, or forcibly close other connections before teardown.
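As an illustration of that last point, a sketch only (the connection strings and the database name MyTestDb are placeholders; adapt to your own connection handling):

using System.Data.SqlClient; // or Microsoft.Data.SqlClient

// Option 1: disable pooling for the test connection string.
var testConnectionString =
    "Server=(localdb)\\MSSQLLocalDB;Database=MyTestDb;Integrated Security=true;Pooling=false";

// Option 2: forcibly release pooled connections before dropping the database.
SqlConnection.ClearAllPools();

using (var master = new SqlConnection(
    "Server=(localdb)\\MSSQLLocalDB;Database=master;Integrated Security=true"))
{
    master.Open();
    using (var cmd = master.CreateCommand())
    {
        // Kick out any remaining sessions, then drop the test database.
        cmd.CommandText =
            "ALTER DATABASE MyTestDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE MyTestDb;";
        cmd.ExecuteNonQuery();
    }
}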
In my experience in testing, I see two points here:
1. If you are checking that data from the DB is transmitted correctly to another point in the program, that is integration testing, and it should be out of scope in the unit testing plan. Make sure that the responsibilities of a unit tester are clear where you work, as some companies skip integration testing levels by assuming that if functional testing is 'OK', the integrations must be too.
2. You mention at the end
It is important for me to delete what I have created, as there are certain test cases which will fail if the database is not created when the tests are executed
but
I would like to create and populate it once, and then delete it once after all test cases have been executed.
If I understand correctly, you need to do this for each test case because not all test cases check the same scenario, so it looks like those two statements are the real problem here.
To answer your question: since it seems you want to automate the process with minimum maintenance for the next releases, and I also know how a work environment tends to corner you into doing things that shouldn't be done, I would think of a preconditions function and a postconditions function, where you do it once and that's it.
If that is not possible for whatever reason, try to create another test case at the beginning (like a Test Case 0) where you create and populate the DB (split it into two if needed) and another one at the end where you delete it.
I'm not familiar with the framework you are using, but I have a lot of experience in testing, opening test levels and automating tasks, and I hope my answer is of some help.

SpecFlow, Selenium, NUnit, parallelization: ChromeDriver windows from two different NUnit tests keep having an unexplained relation

I have a selenium-webdriver-di.cs file like this:
using TechTalk.SpecFlow;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium;
using BoDi;
using System;
using System.IO;
using System.Text;
using System.Collections.Generic;
public class WebDriverHooks
{
    private readonly IObjectContainer container;
    private static Dictionary<string, ChromeDriver> drivers = new Dictionary<string, ChromeDriver>();
    private ScenarioContext _scenarioContext;
    private FeatureContext _featureContext;

    public WebDriverHooks(IObjectContainer container, ScenarioContext scenarioContext, FeatureContext featureContext)
    {
        this.container = container;
        _scenarioContext = scenarioContext;
        _featureContext = featureContext;
    }

    [BeforeFeature]
    public static void CreateWebDriver(FeatureContext featureContext)
    {
        Console.WriteLine("BeforeFeature");
        var chromeOptions = new ChromeOptions();
        chromeOptions.AddArguments("--window-size=1920,1080");
        drivers[featureContext.FeatureInfo.Title] = new ChromeDriver(chromeOptions);
    }

    [BeforeScenario]
    public void InjectWebDriver(FeatureContext featureContext)
    {
        if (!featureContext.ContainsKey("driver"))
        {
            featureContext.Add("driver", drivers[featureContext.FeatureInfo.Title]);
        }
    }

    [AfterFeature]
    public static void DeleteWebDriver(FeatureContext featureContext)
    {
        ((IWebDriver)featureContext["driver"]).Close();
        ((IWebDriver)featureContext["driver"]).Quit();
    }
}
And then in each of my .cs files that contain the step definitions, I have constructors like this:
using System;
using TechTalk.SpecFlow;
using NUnit.Framework;
using OpenQA.Selenium;
using System.Collections.Generic;
using PfizerWorld2019.CommunityCreationTestAutomation.SeleniumUtils;
using System.Threading;
using System.IO;
namespace PfizerWorld2019
{
    [Binding]
    public class SharePointListAssets
    {
        private readonly IWebDriver driver;

        public SharePointListAssets(FeatureContext featureContext)
        {
            this.driver = (IWebDriver)featureContext["driver"];
        }
    }
}
And then I'm using the driver variable in all the functions of the class. Lastly, I have a file that I called Assembly.cs, where I put this for NUnit fixture-level parallelization:
using NUnit.Framework;
[assembly: Parallelizable(ParallelScope.Fixtures)]
In SpecFlow's terms this means parallelization at the feature level (1 .feature file = 1 NUnit test = 1 NUnit fixture).
If I run my tests serially, they work fine.
But if I run two tests in parallel (any two tests), something funny always happens. For example: the first ChromeDriver window tries to click an element, and the click only happens once the second ChromeDriver window (which is running a different test) renders the exact same part of the website. The click is, however, sent to the correct window (the first one).
I have tried:
To use the IObjectContainer interface and then do containers[featureContext.FeatureInfo.Title].RegisterInstanceAs<IWebDriver>(drivers[featureContext.FeatureInfo.Title]) in the InjectWebDriver function
To use Thread.CurrentThread.ToString() instead of featureContext.FeatureInfo.Title for indexing
To do featureContext.Add(featureContext.FeatureInfo.Title + "driver", new ChromeDriver(chromeOptions)) instead of drivers[featureContext.FeatureInfo.Title] = new ChromeDriver(chromeOptions); in the CreateWebDriver function.
I just do not understand what allows this "sharing". Since the FeatureContext is used for everything related to driver spawning and destruction, how can two ChromeDrivers interfere with each other?
Update: I tried the driver initialization & sharing like this:
[BeforeFeature]
public static void CreateWebDriver(FeatureContext featureContext)
{
    var chromeOptions = new ChromeOptions();
    chromeOptions.AddArguments("--window-size=1920,1080");
    chromeOptions.AddArguments("--user-data-dir=C:/ChromeProfiles/Profile" + uniqueIndex);
    WebdriverSafeSharing.setWebDriver(TestContext.CurrentContext.WorkerId, new ChromeDriver(chromeOptions));
}
and I made a webdriver-safe-sharing.cs file like this:
class WebdriverSafeSharing
{
    private static Dictionary<string, IWebDriver> webdrivers = new Dictionary<string, IWebDriver>();

    public static void setWebDriver(string driver_identification, IWebDriver driver)
    {
        webdrivers[driver_identification] = driver;
    }

    public static IWebDriver getWebDriver(string driver_identification)
    {
        return webdrivers[driver_identification];
    }
}
and then in each step definition function I'm just calling WebdriverSafeSharing.getWebDriver(TestContext.CurrentContext.WorkerId), without any involvement of the FeatureContext. And I'm still getting the same issue. Notice how I'm doing chromeOptions.AddArguments("--user-data-dir=C:/ChromeProfiles/Profile" + uniqueIndex); because I'm also starting to distrust that chromedriver itself is thread-safe. But even that did not help.
Update 2: I tried an even more paranoid webdriver-safe-sharing.cs class:
class WebdriverSafeSharing
{
    private static readonly Dictionary<string, ThreadLocal<IWebDriver>> webdrivers = new Dictionary<string, ThreadLocal<IWebDriver>>();
    private static int port = 7000;

    public static void setWebDriver(string driver_identification)
    {
        lock (webdrivers)
        {
            ChromeDriverService service = ChromeDriverService.CreateDefaultService();
            service.Port = port;
            var chromeOptions = new ChromeOptions();
            chromeOptions.AddArguments("--window-size=1920,1080");
            ThreadLocal<IWebDriver> driver =
                new ThreadLocal<IWebDriver>(() =>
                {
                    return new ChromeDriver(service, chromeOptions);
                });
            webdrivers[driver_identification] = driver;
            port += 1;
            Thread.Sleep(1000);
        }
    }

    public static IWebDriver getWebDriver(string driver_identification)
    {
        return webdrivers[driver_identification].Value;
    }
}
It has a lock, a ThreadLocal, and a unique port. It still does not work; exact same issues.
Update 3: If I run two separate Visual Studio instances and run one test in each, it works. It also works if I have two identical projects side by side and select their tests to run in parallel.
The problem appears to be that you are stashing the IWebDriver object in the FeatureContext. The FeatureContext is created and reused for each scenario in a feature. While on the surface it appears safe for running tests in parallel using NUnit (which does not run scenarios in the same feature in parallel), my hunch is that this is not as safe as you think.
Instead, initialize and destroy the IWebDriver object with each scenario rather than each feature. The ScenarioContext should be thread-safe, since it is created once for each scenario and is only used by that one scenario. I would recommend using dependency injection instead:
[Binding]
public class WebDriverHooks
{
    private readonly IObjectContainer container;

    public WebDriverHooks(IObjectContainer container)
    {
        this.container = container;
    }

    [BeforeScenario]
    public void CreateWebDriver()
    {
        var driver = // Initialize your web driver here
        container.RegisterInstanceAs<IWebDriver>(driver);
    }

    [AfterScenario]
    public void DestroyWebDriver()
    {
        var driver = container.Resolve<IWebDriver>();
        // Take screenshot if you want...
        // Dispose of the web driver
        driver.Dispose();
    }
}
Then add a constructor argument to your step definition classes to pass the IWebDriver:
[Binding]
public class FooSteps
{
    private readonly IWebDriver driver;

    public FooSteps(IWebDriver driver)
    {
        this.driver = driver;
    }

    // step definitions...
}
The reason was that all the Selenium API actions were wrapped in static methods written by me. It was a class in a file I had written many weeks ago for code re-usability. However, not being used to working with parallel programming in C#, I honestly was no longer aware that these methods were declared static. I am now running 20 parallel workers on Selenium Grid.
However, I'm placing here some important notes to be aware of if you face parallelization issues with NUnit, SpecFlow and Selenium:
The initialization of the WebDriver must be done in the [BeforeFeature]-bound method if the goal is feature-level rather than scenario-level parallelization.
The initialization of the WebDriver must be thread-safe. What I did was use a static Dictionary, indexed by FeatureContext.FeatureInfo.Title, that contains the WebDrivers (a sketch of this pattern follows these notes).
chromedriver is thread-safe. There is no need for unique data-dir folders, unique ports or unique chromedriver file names. The --headless and --no-sandbox arguments that one might be interested in have not caused me any issues with parallelization (either with Selenium Grid or with plain single-machine, multi-core parallelization). Basically, don't blame chromedriver.
For injecting the WebDriver, do use the IObjectContainer interface in the [BeforeScenario]-bound method. It is great and safe.
Dispose of the driver in the [AfterFeature]-bound method with driver.Dispose() so that you don't have zombie processes. This helped me with Selenium Grid, because when I was using driver.Close() and driver.Quit(), the nodes would not kill the processes after the tests were done.
For NUnit with [assembly: Parallelizable(ParallelScope.Fixtures)] enabled, all the scenarios within a .feature file run within the same FeatureContext, and therefore the scenarios are FeatureContext-safe. This means you can trust the FeatureContext for sharing data between scenarios within the same .feature file.
The [BeforeFeature] hook is called only once per .feature file (as one would logically assume). So if you have x .feature files, the hook will be called x times during a test run.
As the SpecFlow documentation says, for feature-level parallelization either NUnit or xUnit should be used as the test framework. However, NUnit has the benefit of providing out-of-the-box ordering for tests, by alphabetical sort order. This is particularly useful if you want two scenarios to run in sequence within the same feature file, i.e. putting a number in front of each scenario title will ensure their order during execution. xUnit does not support this natively, and it looked like a difficult goal to achieve from searching around.
SpecFlow is more "friendly" towards scenario-level parallelization. This is why the SpecFlow+ Runner test framework by SpecFlow runs scenarios in parallel. It looks like the whole philosophy of SpecFlow (I wouldn't say BDD yet) is to have independent scenarios. This doesn't mean, of course, that you cannot have very nice feature-level parallelization using the other test frameworks. I'm just putting it out there as a heads-up for someone reading this while drafting a strategy for writing feature files.
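Putting these notes together, a minimal sketch of the pattern (an illustration, not the exact code from my project; the hook class name and details are placeholders):

using System.Collections.Concurrent;
using BoDi;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    // Thread-safe store: one driver per feature, keyed by the feature title.
    private static readonly ConcurrentDictionary<string, IWebDriver> Drivers =
        new ConcurrentDictionary<string, IWebDriver>();

    private readonly IObjectContainer container;

    public WebDriverHooks(IObjectContainer container)
    {
        this.container = container;
    }

    [BeforeFeature]
    public static void CreateWebDriver(FeatureContext featureContext)
    {
        var options = new ChromeOptions();
        options.AddArguments("--window-size=1920,1080");
        Drivers[featureContext.FeatureInfo.Title] = new ChromeDriver(options);
    }

    [BeforeScenario]
    public void InjectWebDriver(FeatureContext featureContext)
    {
        // Inject the feature's driver into the scenario container for the step definitions.
        container.RegisterInstanceAs<IWebDriver>(Drivers[featureContext.FeatureInfo.Title]);
    }

    [AfterFeature]
    public static void DisposeWebDriver(FeatureContext featureContext)
    {
        if (Drivers.TryRemove(featureContext.FeatureInfo.Title, out var driver))
        {
            driver.Dispose(); // avoids zombie chromedriver processes
        }
    }
}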

Selenium grid mixing up tests between browsers

Let's say I have to open Google in two browsers, search 'Test1' in the first and 'Test2' in the second. It opens two browsers, writes 'Test1Test2' in one browser and passes the test. How do I get around this?
It works well if I declare the driver in every test function, but that cannot be done if I want to use RemoteWebDriver later to run it on different machines (because it then uses only one node and doesn't do anything on the others). I've heard about using a non-static browser as well, but I'm not sure how to use it, or whether that is the solution to the problem.
namespace ParallelGrid
{
    [TestFixture]
    [Parallelizable]
    public class ParallelGrid1
    {
        [ThreadStatic]
        public static IWebDriver driver;

        [SetUp]
        public void Setup()
        {
            ChromeOptions options = new ChromeOptions();
            driver = new ChromeDriver();
            // driver = new RemoteWebDriver(new Uri("http://xxx.xxx.xx.xxx:4444/wd/hub"), options.ToCapabilities(), TimeSpan.FromSeconds(600)); // hub id goes here
        }

        [Test]
        [Parallelizable]
        public void Test1()
        {
            driver.Navigate().GoToUrl("https://www.google.com");
            driver.FindElement(By.Name("q")).Click();
            driver.FindElement(By.Name("q")).SendKeys("Test");
        }

        [Test]
        [Parallelizable]
        public void Test2()
        {
            driver.Navigate().GoToUrl("https://www.google.com");
            driver.FindElement(By.Name("q")).Click();
            driver.FindElement(By.Name("q")).SendKeys("Grid");
        }
    }
}
For parallelization to work with NUnit and C#, you can only parallelize one test class at a time, so you have to have one test per class.
https://github.com/nunit/nunit/issues/2252
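A hedged sketch of what that restructuring could look like, splitting the two tests into separate fixtures that each own their driver (the fixture names are made up; the URLs and search terms come from the question):

using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
[Parallelizable]
public class GoogleSearchTest1
{
    private IWebDriver driver;

    [SetUp]
    public void Setup() => driver = new ChromeDriver();

    [Test]
    public void Test1()
    {
        driver.Navigate().GoToUrl("https://www.google.com");
        driver.FindElement(By.Name("q")).SendKeys("Test");
    }

    [TearDown]
    public void TearDown() => driver.Quit();
}

[TestFixture]
[Parallelizable]
public class GoogleSearchTest2
{
    private IWebDriver driver;

    [SetUp]
    public void Setup() => driver = new ChromeDriver();

    [Test]
    public void Test2()
    {
        driver.Navigate().GoToUrl("https://www.google.com");
        driver.FindElement(By.Name("q")).SendKeys("Grid");
    }

    [TearDown]
    public void TearDown() => driver.Quit();
}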

How to generate tests based on data in the NUnit framework using C#

So I have this code:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test]
    public void TestCase12()
    {
        ExecuteTestCase(12);
    }

    [Test]
    public void TestCase13()
    {
        ExecuteTestCase(13);
    }

    [Test]
    public void TestCase14()
    {
        ExecuteTestCase(14);
    }
}
The ExecuteTestCase method gets test parameters from my web server and executes the test case with those settings.
Each time I add new test case parameters on my web server, I need to add a new test in my C# code, pass in the ID of the test case parameters stored in my web server database, and recompile my code.
Is there any way to do this automatically? Say, C# gets the IDs of all test case parameters from my server and creates tests for them on the fly?
Importantly, the test cases change frequently. I was thinking about running all test cases inside one test in a loop, but then I'd be unable to run my test cases separately, for example in the NUnit IDE.
So my question is: how do I run multiple test cases depending on data I receive at run time?
You can use the TestCaseSource attribute in order to get the parameters from the web service and have your test cases auto-generated:
[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test, TestCaseSource(nameof(GetTestParameters))]
    public void TestCase(int parameter)
    {
        ExecuteTestCase(parameter);
    }

    static int[] GetTestParameters()
    {
        // call web service and get parameters
        return new[] { 1, 2, 3 };
    }
}
documentation
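If you also want each generated case to appear in the runner with a readable, individually runnable name, the source method can yield TestCaseData objects instead of raw ints. A sketch (GetIdsFromServer is a stand-in for whatever call fetches the IDs from your web server):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
[Category("MyTestSet")]
public class MyTests
{
    [Test, TestCaseSource(nameof(GetTestParameters))]
    public void TestCase(int parameter)
    {
        ExecuteTestCase(parameter);
    }

    static IEnumerable<TestCaseData> GetTestParameters()
    {
        // GetIdsFromServer() is a placeholder for the call to your web server.
        foreach (int id in GetIdsFromServer())
        {
            yield return new TestCaseData(id).SetName("TestCase" + id);
        }
    }

    static IEnumerable<int> GetIdsFromServer() => new[] { 12, 13, 14 };

    void ExecuteTestCase(int id) { /* runs the test with the parameters for this id */ }
}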

How to distinguish between testsuite and testcase on the report

I'm using the Selenium C# WebDriver with NUnit for automation. I am generating an Allure report from the command line and the report gets created fantastically, but I need help with the following issue:
I have the following structure using the Page Object Model (2 tests and 1 page). When I look at the report, it shows at the top "Test run (2 testsuites, 2 testcases)", i.e. each test case is counted as a test suite. I want it to say 1 testsuite, 2 testcases. How do I do that?
namespace ApplicationName.TestCases
{
    [TestFixture]
    class VerifyCreateOrder
    {
        IWebDriver driver;

        [SetUp]
        public void Initialize()
        {
            driver = new FirefoxDriver();
        }

        [TestCase]
        public void doCreateOrder()
        {
            LoginPage loginPage = new LoginPage();
            // some Assertion
        }
    }
}

namespace ApplicationName.TestCases
{
    [TestFixture]
    class SearchOrder
    {
        IWebDriver driver;

        [SetUp]
        public void Initialize()
        {
            driver = new FirefoxDriver();
        }

        [TestCase]
        public void doSearchOrder()
        {
            LoginPage loginPage = new LoginPage();
            // some Assertion
        }
    }
}
Below is my LoginPage page object:
namespace ApplicationName.Pages
{
    class LoginPage
    {
        public void doLogin(IWebDriver driver, String username, String password)
        {
            driver.Navigate().GoToUrl("http://www.myxyzsite.com");
            driver.FindElement(By.Id("xyz")).SendKeys(username);
            driver.FindElement(By.Id("xyz")).SendKeys(password);
            driver.FindElement(By.Id("xyz")).Click();
        }
    }
}
I read about the NUnit Suite attribute at http://www.nunit.org/index.php?p=suite&r=2.5.5 and created a C# class with an enumerator as described, but how do I call/wire it up? What changes do I need to make to my test classes?
using System.Collections;

namespace NUnit.Tests
{
    public class MyTestSuite
    {
        [Suite]
        public static IEnumerable Suite
        {
            get
            {
                ArrayList suite = new ArrayList();
                suite.Add(new VerifyCreateOrder());
                suite.Add(new SearchOrder());
                return suite;
            }
        }
    }
}
I want it to say 1 testsuite, 2 testcases. How do I do that?
Without adding a Suite or similar, you could put both Test cases into the same TestFixture, since that's what the testsuite output is built from. You may be able to do that using a partial class, or you can simply conflate the two classes. However, your Suite solution is a better choice.
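For illustration, conflating the two classes might look something like this (a sketch based on the classes from the question; the combined fixture name is made up):

namespace ApplicationName.TestCases
{
    [TestFixture]
    class OrderTests
    {
        IWebDriver driver;

        [SetUp]
        public void Initialize()
        {
            driver = new FirefoxDriver();
        }

        [TestCase]
        public void doCreateOrder()
        {
            LoginPage loginPage = new LoginPage();
            // some Assertion
        }

        [TestCase]
        public void doSearchOrder()
        {
            LoginPage loginPage = new LoginPage();
            // some Assertion
        }
    }
}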
What changes do I need to make to my test classes?
Call NUnit with the option /fixture:NUnit.Tests.MyTestSuite.
Note that all of this has changed with NUnit 3 and the Suite attribute is gone. I can't see any way to do what you want in NUnit 3 short of reorganizing your test cases.
If it's very important to merge tests into suites, you can use XSLT. The NUnit test result schema is quite straightforward and easy to manipulate using XSLT.
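If you take that route, a minimal way to apply such a stylesheet from C# (the file names here are placeholders; the stylesheet itself still has to do the actual regrouping of the <test-suite> elements):

using System.Xml.Xsl;

class TransformResults
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();
        // merge-suites.xslt is a placeholder for your own stylesheet that
        // regroups the <test-suite>/<test-case> elements the way you want.
        xslt.Load("merge-suites.xslt");
        xslt.Transform("TestResult.xml", "TestResult.merged.xml");
    }
}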
