How to invoke a unit test class via reflection in C#

I have a class decorated with the [TestFixture] attribute, and this class contains methods decorated with the [Test] attribute. Each method's signature is
public void MethodName([ValueSource("TestConfigurations")] TestConfiguration tConf)
There are also set-up and tear-down methods:
[TestFixtureSetUp]
public void TestFixtureSetUp()
{
}
[SetUp]
public void TestSetUp() { }
[TearDown]
public void TestTearDown()
{
}
[TestFixtureTearDown]
public void TestFixtureTearDown()
{
}
How can I run this unit test class via reflection in C#?
Thank you in advance.

Something like:
// Requires: using System; using System.Collections.Generic;
//           using System.Linq; using System.Reflection; using NUnit.Framework;
public static class RunUnitTestsClass<TUnitTests> where TUnitTests : new()
{
    private static IEnumerable<MethodInfo> WithAttribute<TAttribute>() where TAttribute : Attribute
    {
        return typeof(TUnitTests).GetMethods()
            .Where(method => method.GetCustomAttributes(typeof(TAttribute), true).Any());
    }

    private static void RunWithAttribute<TAttribute>() where TAttribute : Attribute
    {
        var unitTests = new TUnitTests();
        foreach (var method in WithAttribute<TAttribute>())
            method.Invoke(unitTests, new object[0]);
    }

    public static void RunTestFixtureSetup()
    {
        // Note: the attribute class name ends in "Attribute".
        RunWithAttribute<TestFixtureSetUpAttribute>();
    }

    // same for [SetUp], [TearDown] and [TestFixtureTearDown]

    public static void RunTests(TestConfiguration tConf)
    {
        var unitTests = new TUnitTests();
        foreach (var method in WithAttribute<TestAttribute>())
            method.Invoke(unitTests, new object[] { tConf });
    }
}
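A minimal usage sketch; MyTests stands in for the [TestFixture] class from the question, and the parameterless TestConfiguration constructor is only an assumption for the example:

var config = new TestConfiguration();                // assumed constructible for the example

RunUnitTestsClass<MyTests>.RunTestFixtureSetup();    // runs the [TestFixtureSetUp] method
RunUnitTestsClass<MyTests>.RunTests(config);         // runs every [Test] method with config
// ...plus the corresponding tear-down wrappers once they are added ("same for the rest of them").

Note that invoking the tests this way bypasses NUnit's [ValueSource] handling, so you supply each TestConfiguration yourself, and that RunTests creates a fresh TUnitTests instance separate from the one used during set-up.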

Related

How to create an automation test for a workflow

I am working on a workflow project that has 19 scenarios for testing the whole system and 34 steps.
So my question is: how can I create an automated test for it?
My current approach is:
create an integration test for each scenario, and then create a main system test that runs all the integration tests.
using Microsoft.VisualStudio.TestTools.UnitTesting;
using System;

namespace Project1
{
    // Unit tests (methods made static so the calls below compile)
    public class UnitTest_step1
    {
        public static void RunTest() { }
    }
    public class UnitTest_step2
    {
        public static void RunTest() { }
    }
    public class UnitTest_step3
    {
        public static void RunTest() { }
    }
    public class UnitTest_step4
    {
        public static void RunTest() { }
    }
    // End of unit tests

    public class IntegrationTests
    {
        public static void IntegrationTest1()
        {
            UnitTest_step1.RunTest();
            UnitTest_step2.RunTest();
            UnitTest_step4.RunTest();
        }
        public static void IntegrationTest2()
        {
            UnitTest_step1.RunTest();
            UnitTest_step2.RunTest();
            UnitTest_step3.RunTest();
            UnitTest_step4.RunTest();
        }
        public static void IntegrationTest3()
        {
            UnitTest_step1.RunTest();
            UnitTest_step4.RunTest();
        }
    }

    [TestClass]
    public class SystemTests
    {
        [TestMethod]
        public void Scenario1()
        {
            IntegrationTests.IntegrationTest1();
        }
        [TestMethod]
        public void Scenario2()
        {
            IntegrationTests.IntegrationTest2();
        }
        [TestMethod]
        public void Scenario3()
        {
            IntegrationTests.IntegrationTest3();
        }
        [TestMethod]
        public void ScenarioN()
        {
            IntegrationTests.IntegrationTestN();
        }
    }
}
Best Regards.
Well, in my opinion, the information provided in your question is very abstract and the question is a bit too broad.
The answer depends on how your workflow engine is implemented and what your system requirements are.
Requirements and implementation details are what define your approach to testing.
I would start by clarifying what kind of steps you have, whether any data context is passed between them,
what side effects the steps produce (writing to a database, sending events, calling other systems' APIs, etc.),
whether steps depend on each other, and so on.
Another question is how you need to assert the results: after each step or after the whole scenario?
The system should be testable, and normally each step should be covered with unit tests.
So the suggested hypothetical approach is to cover each step with isolated unit tests
and the scenarios with integration tests.
I came up with a simple example just to illustrate one of the general approaches.
For simplicity, I assume that the steps have little or no data context and can be reordered.
namespace Workflow.Test
{
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using System;
    using System.Collections.Generic;

    [TestClass]
    public class SystemTests
    {
        [TestMethod]
        public void Scenario1()
        {
            new Workflow().Run(new Scenario1());
        }

        [TestMethod]
        public void Scenario2()
        {
            new Workflow().Run(new Scenario2());
        }

        // The advantage of declaring the steps explicitly is test readability.
        // The declarative approach also opens up the possibility of test generation!
        [TestMethod]
        public void MoreExplicitAndDeclarative()
        {
            new Workflow().Run(new List<Type>
            {
                typeof(Step1),
                typeof(Step2),
                typeof(Step3),
            });
        }

        // Step instantiation may be needed if you want to parameterize some steps.
        [DataTestMethod]
        [DataRow("Custom step")]
        [DataRow("Another step")]
        public void MoreExplicitParameterizedScenario(string customName)
        {
            new Workflow().Run(new List<IRunnable>
            {
                new Step1(),
                new Step3(customName)
            });
        }
    }

    [TestClass]
    public class StepsUnitTests
    {
        [TestMethod]
        public void Step1DoesWhatWeWant()
        {
            // Mock dependencies
            new Step1().Run();
            // Assert results
        }
    }

    #region Workflow Engine Example

    public interface IRunnable
    {
        void Run();
    }

    public class Workflow
    {
        public void Run(Scenario scenario)
        {
            Run(CreateSteps(scenario.GetStepTypes()));
        }

        public void Run(IEnumerable<Type> stepTypes)
        {
            Run(CreateSteps(stepTypes));
        }

        public void Run(List<IRunnable> steps)
        {
            steps.ForEach(step => step.Run());
        }

        private List<IRunnable> CreateSteps(IEnumerable<Type> stepTypes)
        {
            var steps = new List<IRunnable>();
            foreach (var stepType in stepTypes)
            {
                steps.Add(CreateStep(stepType));
            }
            return steps;
        }

        private IRunnable CreateStep(Type stepType)
            => (IRunnable)Activator.CreateInstance(stepType);
    }

    #endregion

    // Step structure can differ according to system requirements.
    // We may add a data context and link steps into a pipeline if needed.
    #region Steps

    public abstract class Step : IRunnable
    {
        private readonly string _stepName;

        protected Step(string name)
        {
            _stepName = name;
        }

        public void Run()
        {
            Console.WriteLine($"{_stepName} in action.");
            Invoke();
        }

        public abstract void Invoke();
    }

    public class Step1 : Step
    {
        public Step1() : base(nameof(Step1))
        {
        }

        public override void Invoke()
        {
            // do work
            Console.WriteLine("Step1 invoked.");
        }
    }

    public class Step2 : Step
    {
        public Step2() : base(nameof(Step2))
        {
        }

        public override void Invoke()
        {
            // do work
            Console.WriteLine("Step2 invoked.");
        }
    }

    public class Step3 : Step
    {
        public Step3(string customName) : base(customName)
        {
        }

        public Step3() : this(nameof(Step3))
        {
        }

        public override void Invoke()
        {
            // do work
            Console.WriteLine("Step3 invoked.");
        }
    }

    public class Step4 : Step
    {
        public Step4() : base(nameof(Step4))
        {
        }

        public override void Invoke()
        {
            // do work
            Console.WriteLine("Step4 invoked.");
        }
    }

    #endregion

    // Scenarios should be as declarative as possible.
    // Let's say a scenario is just a specification of which steps (step Types) should be executed
    // and in what order (a List as a non-unique, ordered collection).
    #region Scenarios

    public abstract class Scenario
    {
        public abstract List<Type> GetStepTypes();
    }

    public class Scenario1 : Scenario
    {
        public override List<Type> GetStepTypes()
            => new List<Type>
            {
                typeof(Step1),
                typeof(Step2),
                typeof(Step3)
            };
    }

    public class Scenario2 : Scenario
    {
        public override List<Type> GetStepTypes()
            => new List<Type>
            {
                typeof(Step1),
                typeof(Step2),
                typeof(Step4)
            };
    }

    #endregion
}

Sharing the driver object initialized in the base class with the page classes

In the POM model, we ideally tend to have the driver object initialized in a base class, and we pass this driver object to the page classes. The problem is to avoid passing this object around while keeping the tests able to run in parallel under the xUnit framework. Below is the structure:
public class BaseClass : IDisposable
{
    public IWebDriver Driver { get; set; }

    public BaseClass()
    {
        if (Driver == null)
        {
            Driver = new ChromeDriver();
        }
    }

    // IDisposable member (not shown in the original snippet)
    public void Dispose()
    {
        Driver?.Quit();
    }
}

public class Page1 : BaseClass
{
    public void method1()
    {
        this.Driver.Navigate().GoToUrl("http://www.google.com");
    }
}

public class Page2 : BaseClass
{
    public void method2()
    {
        this.Driver.Navigate().GoToUrl("http://www.stackoverflow.com");
    }
}

public class TestClass
{
    [Fact]
    public void Test1()
    {
        new Page1().method1();
        new Page2().method2();
    }
}
Now, in the above structure, two instances of the driver are created when the test executes, because each page object constructs its own base class. To avoid this we could make the Driver object static and reinitialize it only when it is null, but that again fails when we run multiple tests in parallel. Any suggestion? What I am trying to achieve is full encapsulation, where the test class has no access to Selenium objects; those objects should only be accessible in the page classes (or their operation classes, if we have any).
We need to ensure we create a driver singleton and that it is thread-safe, so the tests can run in parallel.
[TestClass]
public class UnitTest1 : TestBase
{
    [TestMethod]
    public void TestMethod1()
    {
        new Page1().method1();
        new Page2().method2();
        Driver.Testcleanup();
    }

    [TestMethod]
    public void TestMethod2()
    {
        new Page1().method1();
        new Page2().method2();
        Driver.Testcleanup();
    }

    public class Page1
    {
        public void method1()
        {
            Driver.Instance.Navigate().GoToUrl("http://www.google.com");
        }
    }

    public class Page2
    {
        public void method2()
        {
            Driver.Instance.Navigate().GoToUrl("http://www.google.com");
        }
    }
}
The Driver class handles the initialization of the singleton and the cleanup:
public sealed class Driver
{
    [ThreadStatic]
    public static IWebDriver driver;

    public static IWebDriver Instance
    {
        get
        {
            if (driver == null)
            {
                driver = new ChromeDriver();
            }
            return driver;
        }
    }

    public static void Testcleanup()
    {
        driver.Quit();
        driver = null;
    }
}
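If you would rather avoid [ThreadStatic], System.Threading.ThreadLocal<IWebDriver> gives the same one-driver-per-test-thread behaviour. A sketch of that variant, reusing the member names from the class above (this is an alternative, not part of the original answer):

using System.Threading;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

public sealed class Driver
{
    // Per-thread slot; each parallel test thread gets its own entry.
    private static readonly ThreadLocal<IWebDriver> driver = new ThreadLocal<IWebDriver>();

    public static IWebDriver Instance
    {
        get
        {
            if (driver.Value == null)
            {
                driver.Value = new ChromeDriver();
            }
            return driver.Value;
        }
    }

    public static void Testcleanup()
    {
        if (driver.Value != null)
        {
            driver.Value.Quit();
            driver.Value = null;
        }
    }
}

Usage stays the same: page classes call Driver.Instance, and each test ends with Driver.Testcleanup().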

How to call a static method of a static generic class with C# reflection?

I have many classes with these implementations:
internal static class WindowsServiceConfiguration<T, Y>
    where T : WindowsServiceJobContainer<Y>, new()
    where Y : IJob, new()
{
    internal static void Create()
    {
    }
}

public class WindowsServiceJobContainer<T> : IWindowsService where T : IJob, new()
{
    private T Job { get; } = new T();
    private IJobExecutionContext ExecutionContext { get; }

    public void Start()
    {
    }
    public void Install()
    {
    }
    public void Pause()
    {
    }
    public void Resume()
    {
    }
    public void Stop()
    {
    }
    public void UnInstall()
    {
    }
}

public interface IWindowsService
{
    void Start();
    void Stop();
    void Install();
    void UnInstall();
    void Pause();
    void Resume();
}

public class SyncMarketCommisionsJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
    }
}

public interface IJob
{
    void Execute(IJobExecutionContext context);
}
I would like to call the Create() method of the static WindowsServiceConfiguration class via reflection, like this:
WindowsServiceConfiguration<WindowsServiceJobContainer<SyncMarketCommisionsJob>, SyncMarketCommisionsJob>.Create();
I don't know how to do that using Activator or something similar in order to call the Create method from my C# code.
Best regards.
Something like this ought to work:
// Get the type info for the open type
Type openGeneric = typeof(WindowsServiceConfiguration<,>);
// Make a type for a specific value of T
Type closedGeneric = openGeneric.MakeGenericType(typeof(WindowsServiceJobContainer<SyncMarketCommisionsJob>), typeof(SyncMarketCommisionsJob));
// Find the desired method
MethodInfo method = closedGeneric.GetMethod("Create", BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.InvokeMethod);
// Invoke the static method
method.Invoke(null, new object[0]);
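Since the question says there are many classes with this shape, the same two MakeGenericType calls can be wrapped in a small helper and driven from the job type alone. This is a hedged sketch, not part of the original answer; InvokeCreateFor is a hypothetical helper and the assembly scan is only an illustration:

// Hypothetical helper: closes the generics for a given job type and calls Create().
// Assumes each job type has a public parameterless constructor (the new() constraint)
// and that using System; using System.Linq; using System.Reflection; are in scope.
static void InvokeCreateFor(Type jobType)
{
    Type containerType = typeof(WindowsServiceJobContainer<>).MakeGenericType(jobType);
    Type configType = typeof(WindowsServiceConfiguration<,>).MakeGenericType(containerType, jobType);

    MethodInfo create = configType.GetMethod("Create", BindingFlags.Static | BindingFlags.NonPublic);
    create.Invoke(null, new object[0]);
}

// Example: call Create() for every IJob implementation in the same assembly.
foreach (Type jobType in typeof(IJob).Assembly.GetTypes()
    .Where(t => typeof(IJob).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract))
{
    InvokeCreateFor(jobType);
}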

Can setup and teardown be called for each of the Test Cases?

[Setup]
public void RunBeforeAnyTest()
{
}
[TearDown]
public void RunAfterEveryTest()
{
}
[Test]
public void Test1()
{
}
[TestCase("case1")]
[Testcase("case2")]
public void Test2()
{
}
In the above example, SetUp and TearDown get executed before and after Test1 and Test2. But I want them to execute before and after each individual test case of Test2. Is that possible with the NUnit framework? How can I achieve it?
After correcting for some typos:
[TestFixture]
public class Class1
{
    [SetUp]
    public void RunBeforeAnyTest()
    {
        Console.WriteLine("RunBeforeAnyTest");
    }

    [TearDown]
    public void RunAfterEveryTest()
    {
        Console.WriteLine("RunAfterEveryTest");
    }

    [Test]
    public void Test1()
    {
        Console.WriteLine("Test1()");
    }

    [TestCase("case1")]
    [TestCase("case2")]
    public void Test2(string param)
    {
        Console.WriteLine($"Test2({param})");
    }
}
The output is:
RunBeforeAnyTest
Test1()
RunAfterEveryTest
RunBeforeAnyTest
Test2(case1)
RunAfterEveryTest
RunBeforeAnyTest
Test2(case2)
RunAfterEveryTest
Isn't that what you hoped?

NUnit equivalent of JUnit's Rule

Is there an equivalent of JUnit's Rule in C#? I mean, a way to avoid repeating the same [SetUp] and [TearDown] lines in several different tests. Instead of:
[SetUp]
public void SetUp()
{
    myServer.connect();
}

[TearDown]
public void TearDown()
{
    myServer.disconnect();
}
... put the logic in a rule that can be declared as a field in several tests:
public class MyRule extends ExternalResource {
    @Override
    protected void before() throws Throwable {
        myServer.connect();
    }

    @Override
    protected void after() {
        myServer.disconnect();
    }
}
and then
class TestClass {
    @Rule
    public MyRule myRule = new MyRule();
    ...
}
You could derive your own attribute from NUnit's TestActionAttribute and put your before- and after-test code in it. If you intend to perform the same action before and after every test, you can apply your custom attribute at the class declaration.
e.g.:
[MyRule] // your custom attribute - applied to all tests
public class ClassTest
{
    [Test]
    public void MyTest()
    {
        // ...
    }
}

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method)]
public class MyRuleAttribute : TestActionAttribute
{
    public override void BeforeTest(TestDetails testDetails)
    {
        // connect
    }

    public override void AfterTest(TestDetails testDetails)
    {
        // disconnect
    }
}
NUnit works well with test inheritance, so you can create a base test class with SetUp and TearDown methods:
[TestFixture]
public class BaseTest
{
    [SetUp]
    public virtual void SetUpTest()
    {
        //...
    }

    [TearDown]
    public virtual void TearDownTest()
    {
        //...
    }
}
Then create a regular test class that inherits from the base class and it will reuse the SetUp and TearDown implementations:
[TestFixture]
public class RegularTest : BaseTest
{
    [Test]
    public virtual void MyTest()
    {
        //...
    }
}
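If a fixture needs extra set-up on top of the inherited behaviour, it can override the virtual methods and call the base implementation. A small sketch building on the classes above (the fixture name and the override body are illustrative only):

[TestFixture]
public class SpecialTest : BaseTest
{
    // The [SetUp] attribute is inherited together with the overridden method;
    // calling base.SetUpTest() keeps the shared set-up logic.
    public override void SetUpTest()
    {
        base.SetUpTest();
        // ... fixture-specific set-up
    }

    [Test]
    public void MyTest()
    {
        //...
    }
}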
