I'm building a Selenium test framework on .NET Core and the team decided to go with xUnit. Everything has been going well, but for a while now we've been trying to replicate the functionality of Java TestNG listeners without much luck.
I've been digging around the xUnit git repo and found a few instances where interfaces such as ITestListener are used. After digging deeper, I found that these listeners come from a package called TestDriven.Framework, and I'd like to know how exactly I would use a test listener created with those interfaces.
So far this is my simple test listener that should write something when the test fails:
public class Listener
{
    readonly int totalTests;

    public Listener(ITestListener listener, int totalTests)
    {
        this.totalTests = totalTests;
        TestListener = listener;
        TestRunState = TestRunState.NoTests;
    }

    public ITestListener TestListener { get; private set; }
    public TestRunState TestRunState { get; set; }

    public void onTestFail(ITestFailed args)
    {
        Console.WriteLine(args.Messages);
    }
}
Now, I know you could do this inside a tear-down hook, but remember, this is just a simple example and what I have in mind is something more complex. So to be precise, where/how exactly would I register the test to use this listener? In Java TestNG I would use @Listeners, but in C# I'm not too sure.
Edit 1: The example worked and I managed to add it to my own project structure, but when I try to use this
class TestPassed : TestResultMessage, ITestPassed
{
    /// <summary>
    /// Initializes a new instance of the <see cref="TestPassed"/> class.
    /// </summary>
    public TestPassed(ITest test, decimal executionTime, string output)
        : base(test, executionTime, output)
    {
        Console.WriteLine("Execution time was an awesome " + executionTime);
    }
}
I'm having trouble registering this one, or knowing whether I'm even registering it right. As far as the examples go, I have found the actual message sinks, but I also found the actual test status data, which I'm not exactly sure how to use.
I haven't worked with TestNG, but I did some quick reading and I think I see what you're after.
To demonstrate, I've created a very basic proof-of-concept implementation of the xUnit IMessageSink interface (https://github.com/xunit/abstractions.xunit/blob/master/src/xunit.abstractions/Messages/BaseInterfaces/IMessageSink.cs).
public class MyMessageSink : IMessageSink
{
    public bool OnMessage(IMessageSinkMessage message)
    {
        // Do what you want to in response to events here.
        //
        // Each event has a corresponding implementation of IMessageSinkMessage.
        // See examples here: https://github.com/xunit/abstractions.xunit/tree/master/src/xunit.abstractions/Messages
        if (message is ITestPassed)
        {
            // Beware that this message won't actually appear in the Visual Studio Test Output console.
            // It's just here as an example. You can set a breakpoint to see that the line is hit.
            Console.WriteLine("Execution time was an awesome " + ((ITestPassed)message).ExecutionTime);
        }

        // Return `false` if you want to interrupt test execution.
        return true;
    }
}
The sink is then registered via an IRunnerReporter:
public class MyRunnerReporter : IRunnerReporter
{
    public string Description => "My custom runner reporter";

    // Hard-coding `true` means this reporter will always be enabled.
    //
    // You can also implement logic to conditionally enable/disable the reporter.
    // Most reporters base this decision on an environment variable.
    // Eg: https://github.com/xunit/xunit/blob/cbf28f6d911747fc2bcd64b6f57663aecac91a4c/src/xunit.runner.reporters/TeamCityReporter.cs#L11
    public bool IsEnvironmentallyEnabled => true;

    public string RunnerSwitch => "mycustomrunnerreporter";

    public IMessageSink CreateMessageHandler(IRunnerLogger logger)
    {
        return new MyMessageSink();
    }
}
To use my example code, just copy the classes into your test project (you'll also need to add a reference to the xunit.runner.utility NuGet package). The xUnit framework will automagically discover the IRunnerReporter--no need to explicitly register anything.
If this seems like it's headed in the right direction, you can find a lot more info in the xUnit source code. All of the interfaces involved are well-documented. There are a few existing implementations in the xunit.runner.reporters namespace. AssemblyRunner.cs also demonstrates one possible method for dispatching the different event types to individual handlers.
Edit 1
I've updated the implementation of MyMessageSink (above) to demonstrate how you might listen for an ITestPassed message. I also updated the link embedded in that code snippet--the previous link was to implementations, but we should really use these abstractions.
The if (message is IMessageType) pattern is pretty crude, and won't scale well if you want to listen for many different message types. Since I don't know your needs, I just went with the simplest thing that could possibly work--hopefully it's enough that you can improve/extend it to fit your needs.
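If you do end up handling many message types, one low-tech way to scale it (just a sketch, not anything taken from the xUnit source) is to map message interfaces to handlers in a dictionary and dispatch from OnMessage:
using System;
using System.Collections.Generic;
using Xunit.Abstractions;

public class DispatchingMessageSink : IMessageSink
{
    // Maps a message interface (e.g. ITestPassed) to the handler for that event.
    private readonly Dictionary<Type, Action<IMessageSinkMessage>> handlers =
        new Dictionary<Type, Action<IMessageSinkMessage>>();

    public DispatchingMessageSink()
    {
        handlers[typeof(ITestPassed)] =
            m => Console.WriteLine("Passed in " + ((ITestPassed)m).ExecutionTime + "s");
        handlers[typeof(ITestFailed)] =
            m => Console.WriteLine("Failed: " + string.Join("; ", ((ITestFailed)m).Messages));
    }

    public bool OnMessage(IMessageSinkMessage message)
    {
        foreach (var handler in handlers)
        {
            if (handler.Key.IsInstanceOfType(message))
                handler.Value(message);
        }

        // As before, return false to interrupt test execution.
        return true;
    }
}
Registering it is the same as above: return it from CreateMessageHandler in your IRunnerReporter.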
Related
I am having trouble testing a controller, because some lines in my Startup are null when testing. I want to add a condition to run these lines only if it's not a test run.
// Desired method that tells whether we are testing
if (!this.isTesting())
{
    SwaggerConfig.ConfigureServices(services, this.AuthConfiguration, this.ApiMetadata.Version);
}
The correct answer (although of no help): it should not be able to tell. The application should do everything it does unaware of whether it is in production or test.
However, to test the application in a simpler setting, you can use fake or mock-up modules that are loaded instead of the heavyweight production modules.
But in order to do that, you have to refactor your solution and use dependency injection, for instance.
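For example, one way to apply that idea here (just a sketch; the interface and class names are made up, and it assumes ASP.NET Core's IServiceCollection) is to hide the Swagger setup behind a small abstraction and let the test host register a do-nothing implementation:
using Microsoft.Extensions.DependencyInjection;

// Hypothetical abstraction over the Swagger setup from the question.
public interface IApiDocConfigurator
{
    void Configure(IServiceCollection services);
}

// Production implementation: calls the real, heavyweight setup.
public class SwaggerApiDocConfigurator : IApiDocConfigurator
{
    public void Configure(IServiceCollection services)
    {
        // SwaggerConfig.ConfigureServices(services, authConfiguration, apiVersion);
    }
}

// Test implementation: intentionally does nothing, so tests never touch Swagger.
public class NullApiDocConfigurator : IApiDocConfigurator
{
    public void Configure(IServiceCollection services)
    {
    }
}
Startup then depends only on IApiDocConfigurator; production wires up the Swagger implementation and the test host wires up the null one.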
Some links I found:
Designing with interfaces
Mock Objects
Some more on Mock objects
It really depends on which framework you use for testing. It can be MSTest, NUnit or whatever.
The rule of thumb is that your application should not know whether it is being tested. That means everything should be configured before the actual testing through injection of interfaces. A simple example of how tests should be done:
// This service needs tests. You must test its methods.
public class ProductionService : IProductionService
{
    private readonly IImSomeDependency _dep;

    public ProductionService(IImSomeDependency dep) { _dep = dep; }

    public void PrintStr(string str)
    {
        Console.WriteLine(_dep.Format(str));
    }
}

// This is the stub dependency. It contains whatever you need for a particular test: some data, some request, or just return null.
public class TestDependency : IImSomeDependency
{
    public string Format(string str)
    {
        return "TEST:" + str;
    }
}

// This is production, where you send the SMS, the nuclear missile and everything else that costs you money and resources.
public class ProductionDependency : IImSomeDependency
{
    public string Format(string str)
    {
        return "PROD:" + str;
    }
}
When you run tests, you configure the system like so:
var service = new ProductionService(new TestDependency());
service.PrintStr("Hello world!");
When you run your production code you configure it like so:
var service = new ProductionService(new ProductionDependency());
service.PrintStr("Hello world!");
This way ProductionService just does its work, not knowing what is inside its dependencies, and doesn't need an "is it testing case №431" flag.
Please do not use test-environment flags inside your code if you can avoid it.
UPDATE:
See @Mario_The_Spoon's explanation for a better understanding of dependency management.
I'm building a DLL in C# that I will be consuming with several different projects - so far, I know of a WPF application and a (binary) PowerShell module. Because the core business logic needs to be shared across multiple projects, I don't want the PowerShell module itself to contain the core logic. I'd just like to reference my primary library.
I'm struggling to figure out how to implement a clean logging solution in my core DLL that will be accessible via PowerShell's WriteVerbose() method. Without this, I can provide verbose output to PowerShell about PowerShell-specific things, but I can't provide any verbose output about "waiting for HTTP request" or other features that would be in the core DLL.
Here's a simple example of what I'm trying to do:
using System;
using System.Threading;

namespace CoreApp
{
    public class AppObject
    {
        public AppObject() { }

        public int DoStuffThatTakesForever()
        {
            // Assume logger is a logging object - could be an existing
            // library like NLog, or I could write it myself
            logger.Info("Doing step 1");
            Thread.Sleep(5000);
            logger.Info("Doing step 2");
            Thread.Sleep(5000);
            logger.Info("Doing step 3");

            Random r = new Random();
            return r.Next(0, 10);
        }
    }
}
////////////////////////////////////////////////////////////
// Separate VS project that references the CoreApp project
using System.Management.Automation;
using CoreApp;

namespace CoreApp.PowerShell
{
    [Cmdlet(VerbsCommon.Invoke, "ThingWithAppObject")]
    [OutputType(typeof(Int32))]
    public class InvokeThingWithAppObject : Cmdlet
    {
        [Parameter(Position = 0)]
        public AppObject InputObject { get; set; }

        protected override void ProcessRecord()
        {
            // Here I want to be able to send the logging phrases,
            // "Doing step 1", "Doing step 2", etc., to PowerShell's
            // verbose stream (probably using Cmdlet.WriteVerbose() )
            int result = InputObject.DoStuffThatTakesForever();
            WriteObject(result);
        }
    }
}
How can I provide verbose PowerShell output without tightly binding the core library to the PowerShell module?
I'm definitely open to other solutions, but here's how I ended up solving it:
In the core library, I created an ILogger interface with methods for Info, Verbose, Warn, etc. I created a DefaultLogger class that implemented that logger (by writing everything to the attached debugger), and I gave this class a static singleton instance.
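Roughly, that interface and default implementation look like this (a simplified sketch rather than my exact code; I've used the MyAppLogger name that appears in the snippets below, and the default logger just forwards everything to the attached debugger):
public interface ILogger
{
    void Verbose(string message);
    void Debug(string message);
    void Info(string message);
    void Warn(string message);
}

// Default logger: writes everything to the attached debugger.
public class MyAppLogger : ILogger
{
    public static readonly MyAppLogger Singleton = new MyAppLogger();

    public void Verbose(string message) => System.Diagnostics.Debug.WriteLine("VERBOSE: " + message);
    public void Debug(string message) => System.Diagnostics.Debug.WriteLine("DEBUG: " + message);
    public void Info(string message) => System.Diagnostics.Debug.WriteLine("INFO: " + message);
    public void Warn(string message) => System.Diagnostics.Debug.WriteLine("WARN: " + message);
}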
In each method that I wanted logged, I added an optional ILogger parameter, and added a line to use the default logger if necessary. The method definitions now look like this:
public int DoSomething(ILogger logger = null)
{
    logger = logger ?? MyAppLogger.Singleton;

    // Rest of the code
    Random r = new Random();
    return r.Next(0, 10);
}
I had to do this for each method because the PSCmdlet.WriteVerbose() method expects to be called from the currently running cmdlet. I couldn't create a persistent class variable to hold a logger object because each time the user ran a cmdlet, the PSCmdlet object (with the WriteVerbose method I need) would change.
Finally, I went back to the PowerShell consumer project and implemented the ILogger interface on my base cmdlet class:
public class MyCmdletBase : PSCmdlet, ILogger
{
    public void Verbose(string message) => WriteVerbose(message);
    public void Debug(string message) => WriteDebug(message);
    // etc.
}
Now it's trivial to pass the current cmdlet as an ILogger instance when calling a method from the core library:
[Cmdlet(VerbsCommon.Invoke, "ThingWithAppObject")]
[OutputType(typeof(Int32))]
public class InvokeThingWithAppObject : MyCmdletBase
{
    [Parameter(Mandatory = true, Position = 0)]
    public AppObject InputObject { get; set; }

    protected override void ProcessRecord()
    {
        int result = InputObject.DoSomething(this);
        WriteObject(result);
    }
}
In a different project, I'll need to write some kind of "log adapter" to implement the ILogger interface and write log entries to NLog (or whatever logging library I end up with).
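That adapter should be nothing more than a thin wrapper, something along these lines (a sketch, assuming the ILogger shape above and mapping Verbose onto NLog's Trace level):
// Thin adapter: forwards the core library's ILogger calls to NLog.
public class NLogAdapter : ILogger
{
    private static readonly NLog.Logger Log = NLog.LogManager.GetCurrentClassLogger();

    public void Verbose(string message) => Log.Trace(message);
    public void Debug(string message) => Log.Debug(message);
    public void Info(string message) => Log.Info(message);
    public void Warn(string message) => Log.Warn(message);
}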
The only other hiccup I ran into is that WriteVerbose(), WriteDebug(), etc. cannot be called from a different thread than the main thread the cmdlet is running on. This was a significant problem, since I'm making async Web requests, but after banging my head on the wall I decided to just block and run the Web requests synchronously instead. I'll probably end up implementing both a synchronous and an async version of each Web-based function in the core library.
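For reference, the blocking version of a core-library method ends up looking something like this (a sketch; the method and helper names are invented, and it assumes using System.Threading.Tasks plus the ILogger/MyAppLogger shape above):
public int DoSomethingWithWebRequest(ILogger logger = null)
{
    logger = logger ?? MyAppLogger.Singleton;
    logger.Verbose("Calling the web service");

    // Block until the async call finishes so any WriteVerbose() happens on the cmdlet's thread.
    // GetAwaiter().GetResult() rethrows the original exception instead of an AggregateException.
    string payload = CallWebServiceAsync().GetAwaiter().GetResult();
    return payload.Length;
}

// Stand-in for the real async web request.
private Task<string> CallWebServiceAsync()
{
    return Task.FromResult("payload");
}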
This approach feels a bit dirty to me, but it works brilliantly.
I'm still trying to follow the path to TDD.
Let's say I have a SunSystemBroker that is triggered when a file is uploaded to a shared folder. This broker is designed to open the file, extract records from it, try to find associated payments in other systems and finally call a workflow!
If I want to follow TDD to develop the IBroker.Process() method, how should I go about it?
Note: brokers are independent assemblies inheriting from IBroker and loaded by a console app (like plugins).
This console app is in charge of triggering each broker.
public interface IFileTriggeredBroker : IBroker
{
    FileSystemTrigger Trigger { get; }
    void Process(string file);
}

public class SunSystemPaymentBroker : IFileTriggeredBroker
{
    private readonly IDbDatasourceFactory _hrdbFactory;
    private readonly IExcelDatasourceFactory _xlFactory;
    private readonly IK2DatasourceFactory _k2Factory;
    private ILog _log;

    public void Process(string file)
    {
        (...)
        // _xlFactory.Create(file) > Extract
        // _hrdbFactory.Create() > Find
        // Compare Records
        // _k2Factory.Create > Start
    }
}
Each method is tested individually.
Thank you
Seb
Given that you say each method:
_xlFactory.Create(file);
_hrdbFactory.Create();
// Compare Records
_k2Factory.Create();
is tested individually, there is very little logic to test within Process(file).
If you use something like Moq, you can check that the calls occur:
// Arrange
const string File = "file.xlsx";
var xlFactory = new Mock<IExcelDatasourceFactory>();
var hrdbFactory = new Mock<IDbDatasourceFactory>();
var k2Factory = new Mock<IK2DatasourceFactory>();
var sut = new SunSystemPaymentBroker(xlFactory.Object, hrdbFactory.Object, k2Factory.Object); // I'm assuming you're using constructor injection

// Act
sut.Process(File);

// Assert
xlFactory.Verify(m => m.Create(File), Times.Once);
hrdbFactory.Verify(m => m.Create(), Times.Once);
k2Factory.Verify(m => m.Create(), Times.Once);
For brevity, I've done this as a single test, but breaking it into 3 tests, each with a single "assert" (one verify call), is more realistic. For TDD you would write each test before wiring up that call within Process(file).
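Split out like that, one of those focused tests might look like the following (a sketch assuming xUnit's [Fact], but any test framework works the same way):
[Fact]
public void Process_CreatesExcelDatasourceForTheGivenFile()
{
    // Arrange
    const string File = "file.xlsx";
    var xlFactory = new Mock<IExcelDatasourceFactory>();
    var hrdbFactory = new Mock<IDbDatasourceFactory>();
    var k2Factory = new Mock<IK2DatasourceFactory>();
    var sut = new SunSystemPaymentBroker(xlFactory.Object, hrdbFactory.Object, k2Factory.Object);

    // Act
    sut.Process(File);

    // Assert - one behaviour per test
    xlFactory.Verify(m => m.Create(File), Times.Once);
}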
You may also want to look at having larger, integration-level tests, where you pass in concrete implementations of IExcelDatasourceFactory, IK2DatasourceFactory and IDbDatasourceFactory and exercise the system in more depth.
In the book Growing Object-Oriented Software, Guided by Tests, this would be called an Acceptance Test: written before work begins, and failing while the feature is added in smaller TDD loops of functionality that work toward the overall feature.
You have two different issues:
1) A method is designed to perform many tasks.
Make your code SOLID and apply the single responsibility principle.
Split it into single-responsibility methods, i.e. methods responsible for only one task each.
2) You want to test a procedure that works by side effects (changing the environment), not a pure function.
So I would advise you to split your code into pure function calls (i.e. no side effects) where possible.
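For example, the record-comparison step could be pulled out into a pure function, along the lines of this sketch (the type and member names are invented; substitute your own):
using System.Collections.Generic;
using System.Linq;

// Invented types standing in for whatever your broker actually works with.
public class ExcelRecord { public string Reference { get; set; } }
public class Payment { public string Reference { get; set; } }

public static class PaymentMatcher
{
    // Pure function: no I/O, no side effects, so it is trivial to unit test.
    public static IReadOnlyList<ExcelRecord> FindUnmatched(
        IEnumerable<ExcelRecord> records,
        IEnumerable<Payment> payments)
    {
        var paidReferences = new HashSet<string>(payments.Select(p => p.Reference));
        return records.Where(r => !paidReferences.Contains(r.Reference)).ToList();
    }
}
Process(file) then just orchestrates: extract the records, load the payments, call FindUnmatched and start the workflow, and the interesting logic is tested without touching Excel, the database or K2.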
Read also https://msdn.microsoft.com/en-us/library/aa730844%28v=vs.80%29.aspx
On an Azure Mobile App Services server-side app using MVC 5, Web API 2.0, and EF Core 1.0, controllers can be decorated like so to implement token-based authentication:
// Server-side EF Core 1.0 / Web API 2 REST API
[Authorize]
public class TodoItemController : TableController<TodoItem>
{
    protected override void Initialize(HttpControllerContext controllerContext)
    {
        base.Initialize(controllerContext);
        DomainManager = new EntityDomainManager<TodoItem>(context, Request);
    }

    // GET tables/TodoItem
    public IQueryable<TodoItem> GetAllTodoItems()
    {
        return Query();
    }

    ...
}
I want to be able to do something similar on the client side, where I decorate a method with something like the [Authorize] attribute above - perhaps with a [Secured] decoration, as below:
public class TodoItem
{
    string id;
    string name;
    bool done;

    [JsonProperty(PropertyName = "id")]
    public string Id
    {
        get { return id; }
        set { id = value; }
    }

    [JsonProperty(PropertyName = "text")]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    [JsonProperty(PropertyName = "complete")]
    public bool Done
    {
        get { return done; }
        set { done = value; }
    }

    [Version]
    public string Version { get; set; }
}
// Client-side code calling GetAllTodoItems from above
[Secured]
public async Task<ObservableCollection<TodoItem>> GetTodoItemsAsync()
{
    try
    {
        IEnumerable<TodoItem> items = await todoTable
            .Where(todoItem => !todoItem.Done)
            .ToEnumerableAsync();
        return new ObservableCollection<TodoItem>(items);
    }
    catch (MobileServiceInvalidOperationException msioe)
    {
        Debug.WriteLine(@"Invalid sync operation: {0}", msioe.Message);
    }
    catch (Exception e)
    {
        Debug.WriteLine(@"Sync error: {0}", e.Message);
    }
    return null;
}
Where [Secured] might be defined something like this:
public class SecuredFilterAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // Check if the user is logged in; if not, redirect to the login page.
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // Check some globally accessible member to see if the user is logged out.
    }
}
Unfortunately, the above code only works in Controllers in MVC 1.0 applications and above according to the Microsoft article on "Creating Custom Action Filters": https://msdn.microsoft.com/en-us/library/dd381609(v=vs.100).aspx
How do I implement something like a "Custom Action Filter" that allows me to use the "[Secured]" decoration in a Mobile App Service client instead of the server? The answer will help me create custom authentication from the client side and keep the code in one location without complicating the implementation, i.e., it is a cross-cutting concern like performance metrics, custom execution plans for repeated attempts, logging, etc.
Complicating the scenario, the client also uses Xamarin.Forms for iOS and has to work under Ahead-of-Time compilation due to iOS's requirement for native code; JIT is not possible there.
The reason attributes work in the scenarios you describe is that other code is responsible for actually invoking the methods or reading the properties, and that other code looks for the attributes and modifies its behaviour accordingly. When you are just running C# code, you don't normally get that; there isn't a native way to, say, execute the code in an attribute before a method is executed.
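To illustrate (only a sketch, with made-up names): if you wanted the attribute honoured without any framework, some piece of your own code would have to find the method, check for the attribute and decide whether to invoke it, roughly like this:
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Method)]
public class SecuredAttribute : Attribute { }

public static class SecuredInvoker
{
    // Invokes a method by name, but only after checking for [Secured].
    // This is the kind of plumbing a framework normally does for you.
    public static object Invoke(object target, string methodName, params object[] args)
    {
        MethodInfo method = target.GetType().GetMethod(methodName);

        if (method.GetCustomAttribute<SecuredAttribute>() != null && !Session.IsLoggedIn)
            throw new InvalidOperationException("Not authenticated - trigger your login flow here.");

        return method.Invoke(target, args);
    }
}

// Placeholder for however the client tracks authentication state.
public static class Session
{
    public static bool IsLoggedIn { get; set; }
}
The catch is that callers must go through the dispatcher rather than calling the method directly, which is exactly the gap AOP frameworks fill.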
From what you are describing, it sounds like you are after Aspect Oriented Programming. See What is the best implementation for AOP in .Net? for a list of frameworks.
In essence, using an appropriate AOP framework, you can add attributes or other markers and have code executed or inserted at compile time. There are many approaches to it, hence why I am not being very specific, sorry.
You do need to understand that the AOP approach is different from how things like ASP.NET MVC work, as AOP will typically modify the code that actually runs (in my understanding anyway, and I'm sure there are variations on that as well).
As to whether AOP is really the way to go will depend on your requirements, but I would proceed with caution - it's not for the faint of heart.
One completely alternative solution to this problem is to look at something like MediatR or similar to break your logic into a set of commands which you call via a message bus. The reason that helps is that you can decorate your message bus (or pipeline) with various types of logic, including authorization logic. That solution is very different from what you are asking for, but may be preferable anyway.
Or just add a single-line authorisation call as the first line inside each method instead of doing it as an attribute...
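Which, using the question's own example, would look something like this (EnsureLoggedInAsync being a hypothetical helper that checks your auth state and shows the login UI if needed):
public async Task<ObservableCollection<TodoItem>> GetTodoItemsAsync()
{
    // One explicit line of authorisation instead of a [Secured] attribute.
    await EnsureLoggedInAsync();

    IEnumerable<TodoItem> items = await todoTable
        .Where(todoItem => !todoItem.Done)
        .ToEnumerableAsync();
    return new ObservableCollection<TodoItem>(items);
}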
What you are describing is more generally known by a few different names/terms. The first that comes to mind is "Aspect Oriented Programming" (AOP for short). It deals with what are known as cross-cutting concerns. I'm willing to bet you want to do one of two things:
Log exceptions/messages in a standardized meaningful way
Record times/performance of areas of your system
And in the general sense, yes, C# is able to do such things. There are countless online tutorials on how to do so; it is much too broad to answer here.
However, the authors of ASP.NET MVC have very much thought about these things and supply you with many attributes just as you describe, which can be extended as you please and provide easy access to the pipeline, giving the developer all the information they need (such as the current route, any parameters, any exception, any authorization/authentication request, etc.).
This would be a good place to start: http://www.strathweb.com/2015/06/action-filters-service-filters-type-filters-asp-net-5-mvc-6/
This also looks good: http://www.dotnetcurry.com/aspnet-mvc/976/aspnet-mvc-custom-action-filter
I have a DLL with some classes and methods. And two applications using it.
One admin application that needs almost every method, and one client application that only needs parts of it. But big parts are used by both of them. Now I want to make one DLL with the admin stuff and one with the client stuff.
Duplicating the DLL and editing things manually every time is horrible.
Maybe conditional compilation helps me, but I don't know how to compile the DLL twice with different conditions in one solution with the three projects.
Is there a better approach for this issue than having two different DLLs and manually editing on every change?
In general, you probably don't want admin code exposed on the client side. Since it's a DLL, that code is just waiting to be exploited, because those methods are, by necessity, public. Not to mention decompiling a .NET DLL is trivial and may expose inner-workings of your admin program you really don't want a non-administrator to see.
The best, though not necessarily the "easiest" thing to do, if you want to minimize code duplication, is to have 3 DLLs:
A common library that contains ONLY functions that BOTH applications use
A library that ONLY the admin application will use (or else compile it straight into the application if nothing else uses those functions at all)
A library that ONLY the client application will use (with same caveat as above)
A project that consists of a server, client, and admin client should likely have 3-4 libraries:
Common library, used by all 3
Client library, used by client and server
Admin library, used by server and admin client
Server library, used only by server (or else compile the methods directly into the application)
Have you considered using dependency injection with the common library - some form of constructor injection to determine the rules that need to be applied during execution?
Here's a very simple example:
public interface IWorkerRule
{
    string FormatText(string input);
}

internal class AdminRules : IWorkerRule
{
    public string FormatText(string input)
    {
        return input.Replace("!", "?");
    }
}

internal class UserRules : IWorkerRule
{
    public string FormatText(string input)
    {
        return input.Replace("!", ".");
    }
}

public class Worker
{
    private IWorkerRule Rule { get; set; }

    public Worker(IWorkerRule rule)
    {
        Rule = rule;
    }

    public string FormatText(string text)
    {
        // Generic shared formatting applied to any consumer
        text = text.Replace("#", "*");
        // Here we apply the injected logic
        text = Rule.FormatText(text);
        return text;
    }
}

class Program
{
    // Injecting admin functions
    static void Main()
    {
        const string sampleText = "This message is #Important# please do something about it!";

        // Inject the admin rules.
        var worker = new Worker(new AdminRules());
        Console.WriteLine(worker.FormatText(sampleText));

        // Inject the user rules.
        worker = new Worker(new UserRules());
        Console.WriteLine(worker.FormatText(sampleText));

        Console.ReadLine();
    }
}
When run, it produces this output:
This message is *Important* please do something about it?
This message is *Important* please do something about it.