I have a question regarding Clean Architecture and the Durable Task Framework (DTF). But first, let me show by example what we can do with DTF. DTF lets us run workflows/orchestrations of individual tasks in the background. Here is an example:
public class EncodeVideoOrchestration : TaskOrchestration<string, string>
{
    public override async Task<string> RunTask(OrchestrationContext context, string input)
    {
        string encodedUrl = await context.ScheduleTask<string>(typeof(EncodeActivity), input);
        await context.ScheduleTask<object>(typeof(EmailActivity), input);
        return encodedUrl;
    }
}
The TaskOrchestration wires together individual tasks into a workflow. Here is how you define the tasks:
public class EncodeActivity : TaskActivity<string, string>
{
    protected override string Execute(TaskContext context, string input)
    {
        Console.WriteLine("Encoding video " + input);
        // TODO : actually encode the video to a destination
        return "http://<azurebloblocation>/encoded_video.avi";
    }
}

public class EmailActivity : TaskActivity<string, object>
{
    protected override object Execute(TaskContext context, string input)
    {
        // TODO : actually send email to user
        return null;
    }
}
Pretty straightforward, right? Then you create a worker in Program.cs and register all the tasks and orchestrations:
TaskHubWorker hubWorker = new TaskHubWorker("myvideohub", "connectionDetails")
    .AddTaskOrchestrations(typeof(EncodeVideoOrchestration))
    .AddTaskActivities(typeof(EncodeActivity), typeof(EmailActivity))
    .Start();
Using the DTF client you can actually trigger an orchestration:
TaskHubClient client = new TaskHubClient("myvideohub", "connectionDetails");
client.CreateOrchestrationInstance(typeof(EncodeVideoOrchestration), "http://<azurebloblocation>/MyVideo.mpg");
DTF handles all the magic in the background and can use different storage backends, such as Service Bus or even MSSQL.
Say our application is organized into folders like this:
Domain
Application
Infrastructure
UI
In tasks we run application logic / use cases. But the DTF framework itself is infrastructure, right? If so, how would an abstraction of the DTF framework look like in the application layer? Is it even possible to make the application layer unaware of the DTF?
Regarding the Clean Architecture approach: if you want to get rid of DTF in the Application layer, you can do the following (the original repo uses MediatR, so I did as well).
Implement each TaskActivity as a query/command and put it in the Application layer:
using MediatR;

public class EncodeVideoQuery : IRequest<string>
{
    public EncodeVideoQuery(string url)
    {
        Url = url;
    }

    public string Url { get; set; }
}

public class EncodeHandler : IRequestHandler<EncodeVideoQuery, string>
{
    public async Task<string> Handle(EncodeVideoQuery input, CancellationToken cancel)
    {
        Console.WriteLine("Encoding video " + input.Url);
        // TODO : actually encode the video to a destination
        return "http://<azurebloblocation>/encoded_video.avi";
    }
}

public class EmailCommand : IRequest
{
    public string UserEmail { get; set; }
}

public class EmailCommandHandler : IRequestHandler<EmailCommand>
{
    public async Task<Unit> Handle(EmailCommand input, CancellationToken cancel)
    {
        // TODO : actually send email to user
        return Unit.Value;
    }
}
Implement the actual DTF classes (I looked up that they support async) and put them into the "UI" layer. There's no UI here, but technically it's a console application.
using MediatR;

// AsyncTaskActivity rather than TaskActivity, so ExecuteAsync can be
// overridden directly without having to implement the synchronous Execute
public class EncodeActivity : AsyncTaskActivity<string, string>
{
    private readonly ISender mediator;

    public EncodeActivity(ISender mediator)
    {
        this.mediator = mediator;
    }

    protected override Task<string> ExecuteAsync(TaskContext context, string input)
    {
        // Perhaps no ability to pass a CancellationToken
        return mediator.Send(new EncodeVideoQuery(input));
    }
}
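For symmetry, the email activity can be wrapped the same way. This is only a sketch: it assumes the EmailCommand/EmailCommandHandler from the Application layer above, treats the orchestration input as the user's email address (a made-up mapping), and assumes your DTF version has an AddTaskActivities overload accepting pre-built instances so the ISender can be constructor-injected (otherwise an ObjectCreator can be used):

```csharp
using MediatR;

public class EmailActivity : AsyncTaskActivity<string, object>
{
    private readonly ISender mediator;

    public EmailActivity(ISender mediator)
    {
        this.mediator = mediator;
    }

    protected override async Task<object> ExecuteAsync(TaskContext context, string input)
    {
        // Hypothetical mapping: the orchestration input doubles as the user's email
        await mediator.Send(new EmailCommand { UserEmail = input });
        return null;
    }
}

// Registration sketch: passing instances lets the DI container supply ISender
hubWorker.AddTaskActivities(new EncodeActivity(mediator), new EmailActivity(mediator));
```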
I think your question is not really a single question about the code, but a request for the whole concept of how to make the main program "unaware" of the specific DTF library you are going to use.
Well, it involves several areas of functionality you will need in order to accomplish that. I added a diagram of what the architecture should look like to achieve what you ask for. I didn't focus on the syntax there, since the question is about architecture rather than code itself as I understood it, so treat it as pseudocode; it is just there to deliver the concept.
The key idea is that you will have to read the path or name of the DLL you wish to load from a configuration file (such as app.config); to do that, you will need to learn how to create custom configuration elements in a configuration file.
You can read about those in the links:
https://learn.microsoft.com/en-us/dotnet/framework/configure-apps/
https://learn.microsoft.com/en-us/dotnet/api/system.configuration.configuration?view=dotnet-plat-ext-6.0
Next you need to load the assembly dynamically; you can read about how to do that here: https://learn.microsoft.com/en-us/dotnet/framework/app-domains/how-to-load-assemblies-into-an-application-domain
Once you've done that, remember that the DLL you are loading is still something you need to implement, and it needs to be aware of the specific DTF library you wish to reference; however, it also implements an interface well known to your application.
So basically you will have an interface describing the abstraction your program needs from a DTF library (any DTF library), and your proxy DLL, loaded at runtime, will act as a mediator between that interface and the actual implementation of the specific DTF library.
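A minimal sketch of that idea (all names here are hypothetical; the interface lives in the application, and the proxy DLL named in app.config implements it):

```csharp
using System;
using System.Configuration;
using System.Linq;
using System.Reflection;

// Well-known abstraction in the application layer (assumed shape)
public interface IWorkflowEngine
{
    void StartWorkflow(string workflowName, string input);
}

public static class WorkflowEngineLoader
{
    // Loads the proxy DLL named in app.config and instantiates
    // whatever IWorkflowEngine implementation it contains
    public static IWorkflowEngine Load()
    {
        string dllPath = ConfigurationManager.AppSettings["WorkflowEngineDll"];
        Assembly proxy = Assembly.LoadFrom(dllPath);
        Type engineType = proxy.GetTypes()
            .First(t => typeof(IWorkflowEngine).IsAssignableFrom(t) && !t.IsAbstract);
        return (IWorkflowEngine)Activator.CreateInstance(engineType);
    }
}
```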
And so, per your questions:
how would an abstraction of the DTF framework look like in the
application layer?
Look at the diagram I provided.
Is it even possible to make the application layer unaware of the DTF?
Yes, like in any language that can support plugins/extensions/proxies
You have to fit your implementation to the ubiquitous language. In this specific example: who does the encoding, and when? Whichever entity or service (the client) does the encoding will simply call an IEncode.Encode interface method that takes care of the "details" involved in invoking DTF.
Yes, the definition for DTF belongs in the Infrastructure layer, and it should be treated like everything else in infrastructure, such as Logging or Notifications. That is: the functionality should be put behind an interface that can be injected into the Domain and used by its domain clients.
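Concretely, that might look like the following sketch (the interface and class names are made up; the adapter lives in Infrastructure and hides the TaskHubClient from the rest of the application):

```csharp
// Application/Domain layer: the abstraction
public interface IVideoEncoder
{
    void StartEncoding(string videoUrl);
}

// Infrastructure layer: adapter that delegates to DTF
public class DtfVideoEncoder : IVideoEncoder
{
    private readonly TaskHubClient client;

    public DtfVideoEncoder(TaskHubClient client)
    {
        this.client = client;
    }

    public void StartEncoding(string videoUrl)
    {
        // Kicks off the orchestration shown earlier
        client.CreateOrchestrationInstance(typeof(EncodeVideoOrchestration), videoUrl);
    }
}
```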
You could wrap the activities in a library that returns simple Tasks, which might mix long-running activities with short-running ones. Something like:
public class BusinessContext
{
    OrchestrationContext context;

    public BusinessContext(OrchestrationContext context)
    {
        this.context = context;
    }

    public async Task<string> SendGreeting(string user)
    {
        return await context.ScheduleTask<string>(typeof(SendGreetingTask), user);
    }

    public async Task<string> GetUser()
    {
        return await context.ScheduleTask<string>(typeof(GetUserTask));
    }
}
Then the orchestration is a bit cleaner
public override async Task<string> RunTask(OrchestrationContext context, string input)
{
    //string user = await context.ScheduleTask<string>(typeof(GetUserTask));
    //string greeting = await context.ScheduleTask<string>(typeof(SendGreetingTask), user);
    //return greeting;
    var bc = new BusinessContext(context);
    string user = await bc.GetUser();
    string greeting = await bc.SendGreeting(user);
    return greeting;
}
The Durable Task Framework has already done the abstraction for you; TaskActivity is your abstraction:
public abstract class TaskActivity<TInput, TResult> : AsyncTaskActivity<TInput, TResult>
{
    protected TaskActivity();
    protected abstract TResult Execute(TaskContext context, TInput input);
    protected override Task<TResult> ExecuteAsync(TaskContext context, TInput input);
}
You can work with the TaskActivity type in your Application layer without caring about its implementation. The implementation of a TaskActivity goes into lower layers (probably the Infrastructure layer, though some tasks might be more suitable as a Domain Service if they contain domain logic).
If you want, you can also group the task activities; for example, you can define a base class for an email activity:
Domain Layer Service (Abstraction)
public abstract class EmailActivityBase : TaskActivity<string, object>
{
    public string From { get; set; }
    public string To { get; set; }
    public string Body { get; set; }
}
This is your abstraction of an email activity. Your Application layer is only aware of the EmailActivityBase class.
Infrastructure Layer Implementation
The implementation of this class goes to Infrastructure Layer:
Production email implementation
public class EmailActivity : EmailActivityBase
{
    protected override object Execute(TaskContext context, string input)
    {
        // TODO : actually send email to user
        return null;
    }
}
Test email implementation
public class MockEmailActivity : EmailActivityBase
{
    protected override object Execute(TaskContext context, string input)
    {
        // TODO : create a file in local storage instead of sending an email
        return null;
    }
}
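At startup you could then choose the implementation per environment. A rough sketch, assuming a hypothetical USE_MOCK_EMAIL environment flag; note that DTF resolves activities by name, so depending on your DTF version you may need to register the mock under the production activity's name (e.g. via an ObjectCreator) for the orchestration's ScheduleTask(typeof(EmailActivity), ...) call to resolve to it:

```csharp
// Hypothetical switch; in practice this would come from configuration
bool useMocks = Environment.GetEnvironmentVariable("USE_MOCK_EMAIL") == "1";

TaskHubWorker worker = new TaskHubWorker("myvideohub", "connectionDetails")
    .AddTaskOrchestrations(typeof(EncodeVideoOrchestration))
    .AddTaskActivities(typeof(EncodeActivity));

if (useMocks)
{
    // Test environment: writes to local storage instead of sending mail
    worker.AddTaskActivities(typeof(MockEmailActivity));
}
else
{
    worker.AddTaskActivities(typeof(EmailActivity));
}

worker.Start();
```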
Where to Put Task Orchestration Code?
Depending on your application, this may change. For example, if you are using AWS you can use AWS Lambda for orchestration; on Windows Azure, you can use Azure Automation, or you can even create a separate Windows service to execute the tasks (obviously the Windows service will have a dependency on your application). Again, this really depends on your application, but it may not be a bad idea to put these housekeeping jobs in a separate module.
Related
We have a common architecture for many of our projects, and this architecture requires some boilerplate that is generic for every project. I'm trying to tie all this boilerplate into a single reusable NuGet package to make maintenance easier, but I'm running into issues getting the DI to cooperate.
Specifically, I'm struggling with the concept of services. In the NuGet, I'll have to define basic service interfaces so I can hook some pipelines to use these services. However, every application that will be using this NuGet will need to be able to extend these services with application specific methods.
Let's go over an example with the "User authentication pipeline", which should answer common questions like "Is this user in role x" and application specific questions like "Can this user modify z based on its owner y".
First, our application layer is structured based on CQRS using a common interface, which is implemented by every Query and Command:
public interface IApplicationRequestBase<TRet> : IRequest<TRet> { //IRequest from MediatR
    Task<bool> Authorize(IUserServiceBase service, IPersistenceContextBase ctx);
    void Validate();
}
IUserServiceBase is an interface providing access to the current user (I'm skipping the IPersistenceContextBase, which is just an empty interface):
public interface IUserServiceBase {
string? CurrentUserExternalId { get; }
bool IsUserInRole(params string[] roleNames);
...
And in the authentication pipeline
public class RequestAuthorizationBehaviour<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
    where TRequest : IApplicationRequestBase<TResponse> { //MediatR IPipelineBehavior

    private readonly IUserServiceBase _userService;
    private readonly IPersistenceContextBase _ctx;

    public RequestAuthorizationBehaviour(IUserServiceBase userService, IPersistenceContextBase ctx) {
        _userService = userService;
        _ctx = ctx;
    }

    public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next) {
        if (await request.Authorize(_userService, _ctx)) {
            return await next();
        }
        throw new UnauthorizedAccessException();
    }
}
And finally the NuGet DI definition:
public static class DependencyInjection {
    public static IServiceCollection AddApplicationInfra(this IServiceCollection services) {
        ...
        services.AddTransient(typeof(IPipelineBehavior<,>), typeof(RequestAuthorizationBehaviour<,>));
        return services;
    }
}
All well and good on the NuGet side; now the application. My current approach tries to extend the interfaces directly, which is the easiest way to visualize what I wish to accomplish.
The application has a bunch of app-specific authorization checks, so we have a custom interface for that:
public interface IUserService : IUserServiceBase {
    public string LocalUserIdClaimKey { get; }
    Guid CurrentUserLocalId { get; }
    /// <summary>
    /// Shortcut for checking if the user has any role allowing read access to notifications
    /// </summary>
    bool CurrentUserCanReadNotifications { get; }
    ...
The UserService class implements all the functionality required in the IUserService interface, meaning the IUserServiceBase methods as well. It is defined in a different project (Infrastructure) than the interface (Application).
public class UserService : IUserService {
    private readonly IHttpContextAccessor _contextAccessor;

    public UserService(IHttpContextAccessor contextAccessor) {
        _contextAccessor = contextAccessor;
    }

    public string? CurrentUserExternalId {
        get {
            var user = _contextAccessor.HttpContext.User;
            if (user != null) {
                return user.FindFirst(JwtClaimTypes.Subject)?.Value;
            }
            return null;
        }
    }
    ...
And finally, in our Command, where it all should come together:
public class UpdateSubsequentTreatmentFacilitiesCommand : IApplicationRequestBase<int> {
public async Task<bool> Authorize(IUserService service, IPersistenceContext ctx) {
//Application specific authorization check
}
public void Validate() {
}
Now, here we get a build error, stating that 'UpdateSubsequentTreatmentFacilitiesCommand' does not implement interface member 'IApplicationRequestBase<int>.Authorize(IUserServiceBase, IPersistenceContextBase)'. This is probably what I'm encountering here (though I still can't figure out why exactly...).
So, to reiterate:
Goal is to package common project boilerplate to a single NuGet
We need to be able to extend the services defined in the NuGet with application specific functionality
IApplicationRequestBase defines the type of the service parameter as IUserServiceBase, but UpdateSubsequentTreatmentFacilitiesCommand tried to use IUserService. OO programming and inheritance don't let you change method signatures.
If you can change IApplicationRequestBase, adding a TService generic parameter will let you get around it:
public interface IApplicationRequestBase<TRet, TService> : IRequest<TRet>
    where TService : IUserServiceBase
{
    Task<bool> Authorize(TService service, IPersistenceContextBase ctx);
    void Validate();
}
public class UpdateSubsequentTreatmentFacilitiesCommand : IApplicationRequestBase<int, IUserService>
{
    public async Task<bool> Authorize(IUserService service, IPersistenceContextBase ctx)
    {
        // method body
    }

    // rest of class
}
However, given that IUserService is an interface, if it is the only thing that extends/implements IUserServiceBase, then this sounds like a case of overengineering. There's a saying that perfect is the enemy of good. In other words, attempting to be too generic and too reusable where it's not actually needed just slows down progress. By all means strive for a high-quality codebase, but you also need to be pragmatic.
If other apps that use IApplicationRequestBase have their own user service rather than the same IUserService as your app, then you'll need to find another approach, given that C# is a strongly typed language. You could simply cast the IUserServiceBase to an IUserService in the method body. Rather than extending the interface, you could use an extension method. If you're creative, you might think of other approaches as well.
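The cast approach might look like this sketch, keeping the original single-generic interface (it assumes the DI container actually supplies an IUserService implementation at runtime, and uses the CurrentUserCanReadNotifications property from the interface above as a stand-in for a real check):

```csharp
public class UpdateSubsequentTreatmentFacilitiesCommand : IApplicationRequestBase<int>
{
    public async Task<bool> Authorize(IUserServiceBase service, IPersistenceContextBase ctx)
    {
        // Cast to the app-specific interface; this throws InvalidCastException
        // if DI registered some other IUserServiceBase implementation
        var userService = (IUserService)service;
        return userService.CurrentUserCanReadNotifications;
    }

    public void Validate() { }
}
```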
However, looking at IUserService, my guess is that it exists only to improve the performance of checking certain commonly used roles. If I'm wrong and it's about convenience rather than performance, then an extension method should be sufficient. If the concern is performance, then make sure that the implementation of IsUserInRole does caching. Looking up a string still won't be as fast as returning a property's backing field, but changing your software architecture to improve performance for something you haven't profiled, to confirm it is actually a bottleneck, is the definition of premature optimization. If IsUserInRole does basic caching, you'll probably find that the performance is good enough, and helper/extension methods solve whatever readability/code-quality issue you're trying to solve.
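If convenience is the goal, an extension method on the base interface keeps the NuGet contract unchanged. A sketch (the role names here are made up; the real ones would live in the application):

```csharp
public static class UserServiceExtensions
{
    // Same shortcut as the IUserService property, but without extending the interface.
    // "Admin" and "NotificationReader" are hypothetical role names.
    public static bool CurrentUserCanReadNotifications(this IUserServiceBase service)
        => service.IsUserInRole("Admin", "NotificationReader");
}
```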
My ASP.NET Core application uses our self-designed pipelines to process requests. Every pipeline contains one or more blocks, and the number of blocks has no limit; it can be 200+ in a real instance. The pipeline goes through all blocks in a sequence taken from a configuration, like:
Pipeline<DoActionsPipeline>().AddBlock<DoActionAddUserBlock>().AddBlock<DoActionAddUserToRoleBlock>()...
In the above example (just an example), there are 200+ blocks configured in this pipeline; the blocks could be DoActionAddUserBlock, DoActionAddUserToRoleBlock, DoActionAddAddressToUserBlock, and so on. Many actions are mixed in one pipeline. (Please don't ask why they're mixed; it's just an example and doesn't matter to my question.)
In this example, each block checks the action name first and runs its logic only on a match. This is pretty bad: every block has to be instantiated and traversed to get a single request done.
Here is sample code, not very good, but it shows my pain:
public class DoActionAddUserBlock : BaseBlock<User, User, Context>
{
    public override User Execute(User arg, Context context)
    {
        if (context.ActionName != "AddUser")
        {
            return arg;
        }
        return AddUser(arg);
    }

    protected User AddUser(User user)
    {
        return user;
    }
}

public abstract class BaseBlock<TArg, TResult, TContext>
{
    public abstract TResult Execute(TArg arg, TContext context);
}

public class Context
{
    public string ActionName { get; set; }
}

public class User
{
}
I want to avoid instantiating blocks that the conditions rule out; I think the condition should live at the pipeline-configuration level. How can I achieve this? Attributes? Or something else?
[Condition("Action==AddUser")] // or [Action("AddUser")] // or [TypeOfArg("User")]
public class DoActionAddUserBlock : BaseBlock<User, User, Context>
{
    public override User Execute(User arg, Context context)
    {
        return AddUser(arg);
    }
    //...
}
Please show us the Pipeline<T>() method (is it a method or a class?), because it's essential for an accurate answer.
Anyway, I'll try my best with the current info.
Your goal is "I want to conditionally instantiate blocks", so you have to move the condition into an out-of-instance context, something you can do with attributes:
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ActionNameAttribute : Attribute
{
    public ActionNameAttribute(string name)
    {
        this.Name = name;
    }

    public string Name { get; }
}
[ActionName(nameof(AddUser))]
public class DoActionAddUserBlock : BaseBlock<User, User, Context>
{
    public override User Execute(User arg, Context context)
    {
        return AddUser(arg);
    }
}
Then do the check in the .AddBlock<T>() method (which, I guess, looks something like this):
public YourUnknownType<T> AddBlock<TBlock>()
{
    var type = typeof(TBlock);
    var attributes = type.GetCustomAttributes(typeof(ActionNameAttribute), inherit: true); // or false if you don't need inheritance
    var attribute = attributes.FirstOrDefault() as ActionNameAttribute;
    if (attribute != null && attribute.Name == this.Context.ActionName)
    {
        // place here the block init
    }
    return AnythingYouActuallyReturn();
}
Hope this helps!
IMO
You should define different pipelines for different usages. The pipeline is a design pattern that should be used only in particular cases; maybe it is not a good fit for your case?
I think it shouldn't be the pipeline's responsibility to check the action name and MAYBE run logic. If you define a pipeline for some logic, it should just "go with the flow".
Therefore pipelines should be built once at project startup, and initializing the whole pipeline just once is good.
Please think about whether using pipelines is right for your scenario.
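One way to follow that advice is to build one pipeline per action at startup and dispatch by action name. A rough sketch, reusing the blocks from the question (the pipeline shape is assumed, since the real Pipeline<T>() method was not shown):

```csharp
public class PipelineDispatcher
{
    // Built once at startup: one small pipeline per action name,
    // so blocks never have to check the action themselves
    private readonly Dictionary<string, Func<User, Context, User>> pipelines =
        new Dictionary<string, Func<User, Context, User>>();

    public PipelineDispatcher()
    {
        pipelines["AddUser"] = (arg, ctx) => new DoActionAddUserBlock().Execute(arg, ctx);
        // pipelines["AddUserToRole"] = ...; each entry composes only the blocks it needs
    }

    public User Run(Context context, User arg)
    {
        return pipelines.TryGetValue(context.ActionName, out var pipeline)
            ? pipeline(arg, context)
            : arg; // unknown action: pass through unchanged
    }
}
```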
I've built a simple pipeline with a builder and steps; you can check it here. It's in Polish, but all the code is in English, so you might get the point.
I'm making an application that uses an external API, but I don't want my application to be dependent on the API. I have been reading about how to achieve this, and what I want appears to be loose coupling: I want to loosely couple the class that uses the external API from the rest of my application. My question is how to achieve this. I've read about different design patterns, but I can't find one that solves my problem.
public class GoogleCalendarService
{
    private const string CalendarId = ".....";

    private CalendarService Authenticate(string calendarId)
    {
        ...
    }

    public void Create(Booking newBooking, string userId)
    {
        ...
        Insert(newEvent, userId);
    }

    private void Insert(Event newEvent, string userId)
    {
        call authenticate account
        ....
    }

    public List<Booking> GetEvents()
    {
        call authenticate account
        ...
    }
}
Above is my code for the class that uses the external API. In the rest of my application I use this class the following way:
public class MyApplication
{
    private void MyFunction()
    {
        GoogleCalendarService googleCalendarService = new GoogleCalendarService();
        googleCalendarService.Create(..., ...)
    }
}
I do this in multiple places in my application. So my question is: how can I loosely couple the API class from the rest?
Edit: I probably want a general calendar service interface that makes it easier to replace the Google calendar service with another calendar service when needed.
that makes it easier to replace the google calendar service with another calendar service
The main pattern you will want to look at is Adapter. But you would want to use that in combination with Dependency Injection.
The DI first:
public class MyApplication
{
    // constructor injection
    private IGeneralCalendarService _calendarService;

    public MyApplication(IGeneralCalendarService calendarService)
    {
        _calendarService = calendarService;
    }

    private void MyFunction()
    {
        _calendarService.CreateEvent(..., ...)
    }
}
And the Adapter would look something like
public class GoogleCalendarServiceAdapter : IGeneralCalendarService
{
    // implement the interface by calling the Google API.
}
In addition, you will need generic classes for Event etc.; they belong in the same layer as the interface.
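The interface itself was not spelled out above; a possible shape could be the following sketch (all names are assumptions, not part of any library):

```csharp
// Provider-neutral model; lives alongside the interface, not in the Google layer
public class CalendarEvent
{
    public string Title { get; set; }
    public DateTime Start { get; set; }
    public DateTime End { get; set; }
}

// The abstraction the rest of the application depends on
public interface IGeneralCalendarService
{
    void CreateEvent(CalendarEvent newEvent, string userId);
    List<CalendarEvent> GetEvents();
}
```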
You need to write a wrapper around that API, and rewrite every output/input of that API in terms of your wrapper's own types. After that, you can take advantage of Dependency Injection to use your own code. This way you have an abstraction layer around that API.
In our layered architecture I am designing a BLL component called AppHandover and have written the basic high-level code for it. I want it to follow the SOLID principles, be loosely coupled, adopt separation of concerns, and be testable.
Here is what AppHandover should do:
Check if the user owns the app; if not, throw an error
Remove the history if possible (i.e. no more apps are assigned to the user)
Transfer the ownership to the next instance
Question is: am I on the right track, and does the following sample seem SOLID?
public interface ITransferOwnership
{
    void TransferOwnership(string userId, string appId, TransferDirection transferDirection);
}

public interface IOwnershipVerification
{
    bool UserOwnsApp(string userId, int budgetId, string appId);
}

public interface IPreserveHistoryCheck
{
    bool ShouldDeleteTemporaryBudgetData(string userId, int budgetId);
}

public interface IRemoveHistory
{
    void DeleteTemporaryBudgetData(string userId, int budgetId);
}
Handover process implementation
public class AppHandoverProcess : KonstruktDbContext, ITransferOwnership
{
    private IOwnershipVerification _ownerShipVerification;
    private IPreserveHistoryCheck _preserveHistory;
    private IRemoveHistory _removeHistory;
    private ITransferOwnership _transferOwnership;

    public AppHandoverProcess()
    {
    }

    public AppHandoverProcess(IOwnershipVerification ownerShipVerification,
        IPreserveHistoryCheck preserveHistory,
        IRemoveHistory removeHistory,
        ITransferOwnership transferOwnership)
    {
        _ownerShipVerification = ownerShipVerification;
        _preserveHistory = preserveHistory;
        _removeHistory = removeHistory;
        _transferOwnership = transferOwnership;
    }

    public void PerformAppHandover(string userId, string appId, int budgetId)
    {
        if (_ownerShipVerification.UserOwnsApp(userId, budgetId, appId))
        {
            if (_preserveHistory.ShouldDeleteTemporaryBudgetData(userId, budgetId))
            {
                _removeHistory.DeleteTemporaryBudgetData(userId, budgetId);
            }
            //handover logic here..
            _transferOwnership.TransferOwnership(userId, appId, TransferDirection.Forward);
        }
        else
        {
            throw new Exception("AppHandover: User does not own app, data cannot be handed over");
        }
    }
}
Concerning the code you outlined above, I definitely think you're on the right track. I would push the design a little further and treat TransferOwnership as an additional injected dependency, with its own implementation behind the ITransferOwnership interface.
Following this approach, your AppHandoverProcess is completely decoupled from its client, and the behaviour will be defined in the service configuration.
Enforcing this isolation for TransferOwnership will allow you to easily unit test any object implementing the interface without needing to mock AppHandoverProcess's dependencies.
Also, any AppHandoverProcess test should be trivial, as the only things you'll need to verify are that your services are invoked or that the exception is thrown.
Hope this makes sense.
I would make KonstruktDbContext an injectable dependency. AppHandoverProcess should not inherit from it, as that is a different responsibility.
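A sketch of that change, keeping the interfaces from the question (abbreviated to one collaborator for brevity):

```csharp
public class AppHandoverProcess : ITransferOwnership
{
    private readonly KonstruktDbContext _db;
    private readonly IOwnershipVerification _ownershipVerification;

    // The context is injected like any other dependency
    // instead of being a base class
    public AppHandoverProcess(KonstruktDbContext db,
        IOwnershipVerification ownershipVerification)
    {
        _db = db;
        _ownershipVerification = ownershipVerification;
    }

    public void TransferOwnership(string userId, string appId, TransferDirection transferDirection)
    {
        // handover logic using _db and the injected services
    }
}
```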
I'm using Drum, which provides a generic class `UriMaker<TController>`:
public class UriMaker<TController>
{
    // I need use this one
    public UriMaker(UriMakerContext context, HttpRequestMessage request) { }
    public UriMaker(Func<MethodInfo, RouteEntry> mapper, UrlHelper urlHelper) { }
}
Used like this:
public class UserController : ApiController
{
    public UserController(UriMaker<UserController> urlMaker) {}
}
I used to register it with Unity:
container.RegisterType(typeof(UriMaker<>),
    new InjectionConstructor(typeof(UriMakerContext), typeof(HttpRequestMessage)));
but now I am migrating to Simple Injector. I already have this:
UriMakerContext uriMakerContext = config.MapHttpAttributeRoutesAndUseUriMaker();
container.RegisterSingle(uriMakerContext);
So how do I now register UriMaker<> itself?
Although it is possible to configure Simple Injector to inject a UriMaker<TController> directly into your controllers, I strongly advise against this, for multiple reasons.
First of all, you should strive to minimize the dependencies your application takes on external libraries. This can easily be done by defining an application-specific abstraction (conforming to the ISP).
Second, injecting the UriMaker directly makes your code extremely hard to test, since the UriMaker is pulled into your test code, while it assumes an active HTTP request and assumes the Web API route system is configured correctly. These are all things you don't want your test code to depend upon.
Last, it makes verifying the object graph harder, since the UriMaker depends on an HttpRequestMessage, which is a runtime value. In general, runtime values should not be injected into the constructors of your services. You should build up your object graph with components (the stuff that contains the application's behavior) and send runtime data through the object graph after construction.
So instead, I suggest the following abstraction:
public interface IUrlProvider
{
    Uri UriFor<TController>(Expression<Action<TController>> action);
}
Now your controllers can depend on this IUrlProvider instead of depending on an external library:
public class UserController : ApiController
{
    private readonly IUrlProvider urlProvider;

    public UserController(IUrlProvider urlProvider)
    {
        this.urlProvider = urlProvider;
    }

    public string Get()
    {
        return this.urlProvider.UriFor<HomeController>(c => c.SomeFancyAction()).ToString();
    }
}
Under the covers you of course still need to call Drum, and for this you can define a proxy implementation of IUrlProvider:
public class DrumUrlProvider : IUrlProvider
{
    private readonly UriMakerContext context;
    private readonly Func<HttpRequestMessage> messageProvider;

    public DrumUrlProvider(UriMakerContext context,
        Func<HttpRequestMessage> messageProvider)
    {
        this.context = context;
        this.messageProvider = messageProvider;
    }

    public Uri UriFor<TController>(Expression<Action<TController>> action)
    {
        HttpRequestMessage message = this.messageProvider.Invoke();
        var maker = new UriMaker<TController>(this.context, message);
        return maker.UriFor(action);
    }
}
This implementation can be registered as singleton in the following way:
container.EnableHttpRequestMessageTracking(config);
UriMakerContext uriMakerContext =
config.MapHttpAttributeRoutesAndUseUriMaker();
IUrlProvider drumProvider = new DrumUrlProvider(uriMakerContext,
() => container.GetCurrentHttpRequestMessage());
container.RegisterSingle<IUrlProvider>(drumProvider);
This example uses the Simple Injector Web API integration package to allow retrieving the current request's HttpRequestMessage using the EnableHttpRequestMessageTracking and GetCurrentHttpRequestMessage extension methods as explained here.
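The testability benefit then becomes easy to see: the controller can be exercised with a hand-rolled fake, without any HTTP request or route table. A sketch (the fake and its URL are made up for illustration):

```csharp
// Hand-rolled fake implementation of the application-owned abstraction
public class FakeUrlProvider : IUrlProvider
{
    public Uri UriFor<TController>(Expression<Action<TController>> action)
    {
        return new Uri("http://localhost/fake");
    }
}

// In a unit test, the controller is constructed directly:
var controller = new UserController(new FakeUrlProvider());
var result = controller.Get();
```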