Message inheritance and getting access to the message from a consumer - C#

First of all, I'm aware that inheritance in messaging is questionable and should be used carefully, but...
In a system I have a set of different commands that I publish to the bus in order to trigger the appropriate services. Now I want to log the fact that some messages (not all messages, just a subset) were issued.
I was thinking that the ideal way to handle this would be a separate service that simply subscribes to the list of message types I'm interested in, catches them, and logs them.
What is the best way to implement this?
I was thinking about using inheritance, like this:
public interface IAction
{
}

public interface TestCommand : IAction
{
    Guid Id { get; }
    string Message { get; }
}

public class TestHandler : IConsumer<IAction>, IConsumer<TestCommand>
{
    public async Task Consume(ConsumeContext<IAction> context)
    {
        var action = context.Message;
    }

    public async Task Consume(ConsumeContext<TestCommand> context)
    {
        var cmd = context.Message;
    }
}
but the problem here is that the IAction consumer doesn't have access to the whole message, so I can't get the content to log.
Is it possible to solve this somehow and get access to the message content? Or is this approach totally wrong and should I use something else?

You can use context.TryGetPayload<YourMessageType>(out var message), but again you need to know the message type.
You can also consume (or get the payload as) a JObject, which will give you everything as plain JSON.
Observers are better suited for logging; check the Audit feature to see how it can be done.
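For the observer route, here is a rough sketch of a logging observer along the lines of MassTransit's IReceiveObserver (treat the member list as an outline; it may differ between versions):
using System;
using System.Threading.Tasks;
using MassTransit;
using Newtonsoft.Json;

public class LoggingReceiveObserver : IReceiveObserver
{
    public Task PreReceive(ReceiveContext context) => Task.CompletedTask;

    public Task PostReceive(ReceiveContext context) => Task.CompletedTask;

    public Task PostConsume<T>(ConsumeContext<T> context, TimeSpan duration, string consumerType)
        where T : class
    {
        // Logs every successfully consumed message, whatever its concrete type.
        Console.WriteLine($"Consumed {typeof(T).Name}: {JsonConvert.SerializeObject(context.Message)}");
        return Task.CompletedTask;
    }

    public Task ConsumeFault<T>(ConsumeContext<T> context, TimeSpan duration, string consumerType, Exception exception)
        where T : class => Task.CompletedTask;

    public Task ReceiveFault(ReceiveContext context, Exception exception) => Task.CompletedTask;
}

// Connected once when the bus is created:
// busControl.ConnectReceiveObserver(new LoggingReceiveObserver());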

You should move the declaration of your
    string Message { get; }
to
    public interface IAction
I mean, every command will have a Message and an Id, right?
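Concretely, the suggestion is (a minimal sketch):
using System;

public interface IAction
{
    Guid Id { get; }
    string Message { get; }
}

public interface TestCommand : IAction
{
}
With both members on IAction, the IConsumer<IAction> consumer can read context.Message.Id and context.Message.Message for any command implementing the interface.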


How to abstract the DTF framework in the application layer?

I have a question regarding Clean Architecture and the Durable Task Framework (DTF). But first, let me show by example what we can do with DTF. DTF enables us to run workflows/orchestrations of individual tasks in the background. Here is an example:
public class EncodeVideoOrchestration : TaskOrchestration<string, string>
{
    public override async Task<string> RunTask(OrchestrationContext context, string input)
    {
        string encodedUrl = await context.ScheduleTask<string>(typeof(EncodeActivity), input);
        await context.ScheduleTask<object>(typeof(EmailActivity), input);
        return encodedUrl;
    }
}
The TaskOrchestration wires together individual tasks into a workflow. Here is how you define the tasks:
public class EncodeActivity : TaskActivity<string, string>
{
    protected override string Execute(TaskContext context, string input)
    {
        Console.WriteLine("Encoding video " + input);
        // TODO : actually encode the video to a destination
        return "http://<azurebloblocation>/encoded_video.avi";
    }
}

public class EmailActivity : TaskActivity<string, object>
{
    protected override object Execute(TaskContext context, string input)
    {
        // TODO : actually send email to user
        return null;
    }
}
Pretty straightforward, right? Then you create a worker in Program.cs and register all the tasks and orchestrations:
TaskHubWorker hubWorker = new TaskHubWorker("myvideohub", "connectionDetails")
    .AddTaskOrchestrations(typeof(EncodeVideoOrchestration))
    .AddTaskActivities(typeof(EncodeActivity), typeof(EmailActivity))
    .Start();
Using the DTF client you can actually trigger an orchestration:
TaskHubClient client = new TaskHubClient("myvideohub", "connectionDetails");
client.CreateOrchestrationInstance(typeof(EncodeVideoOrchestration), "http://<azurebloblocation>/MyVideo.mpg");
DTF handles all the magic in the background and can use different storage solutions such as Service Bus or even MSSQL.
Say our application is organized into folders like this:
Domain
Application
Infrastructure
UI
In the tasks we run application logic / use cases. But the DTF framework itself is infrastructure, right? If so, what would an abstraction of the DTF framework look like in the application layer? Is it even possible to make the application layer unaware of DTF?
With regard to the Clean Architecture approach, if you want to get rid of DTF in the Application layer, you can do the following (the original repo uses MediatR, so I did as well):
Implement each TaskActivity as a query/command and put it in the Application layer:
using MediatR;

public class EncodeVideoQuery : IRequest<string>
{
    public EncodeVideoQuery(string url)
    {
        Url = url;
    }

    public string Url { get; set; }
}

public class EncodeHandler : IRequestHandler<EncodeVideoQuery, string>
{
    public async Task<string> Handle(EncodeVideoQuery input, CancellationToken cancel)
    {
        Console.WriteLine("Encoding video " + input.Url);
        // TODO : actually encode the video to a destination
        return "http://<azurebloblocation>/encoded_video.avi";
    }
}

public class EmailCommand : IRequest
{
    public string UserEmail { get; set; }
}

public class EmailCommandHandler : IRequestHandler<EmailCommand>
{
    public async Task<Unit> Handle(EmailCommand input, CancellationToken cancel)
    {
        // TODO : actually send email to user
        return Unit.Value;
    }
}
Implement the actual DTF classes (I looked it up; they support async) and put them into the "UI" layer. There is no UI here; technically it's a console application:
using MediatR;

// AsyncTaskActivity only requires ExecuteAsync; deriving from TaskActivity
// would also demand an override of its abstract synchronous Execute.
public class EncodeActivity : AsyncTaskActivity<string, string>
{
    private readonly ISender mediator;

    public EncodeActivity(ISender mediator)
    {
        this.mediator = mediator;
    }

    protected override Task<string> ExecuteAsync(TaskContext context, string input)
    {
        // Perhaps no ability to pass a CancellationToken
        return mediator.Send(new EncodeVideoQuery(input));
    }
}
I think your question is not really a single question about the code, but a request for the whole concept of how to make the main program "unaware" of the specific DTF library you are going to use.
Well, it involves several areas of functionality you will need in order to accomplish that. I added a diagram of how the architecture should look to achieve what you ask for; I didn't focus on the syntax, since as I understood it the question is about architecture rather than code, so treat it as pseudocode intended to deliver the concept.
The key idea is that you will have to read the path or name of the DLL you wish to load from a configuration file (such as app.config), and to do that you will need to learn how to create custom configuration elements in a configuration file.
You can read about those in the links:
https://learn.microsoft.com/en-us/dotnet/framework/configure-apps/
https://learn.microsoft.com/en-us/dotnet/api/system.configuration.configuration?view=dotnet-plat-ext-6.0
Next you need to dynamically load the assembly; you can read about how to load assemblies dynamically here: https://learn.microsoft.com/en-us/dotnet/framework/app-domains/how-to-load-assemblies-into-an-application-domain
Once you're past that, remember that the DLL you are loading is still something you need to implement. It needs to be aware of the specific DTF library you wish to reference, but it also implements an interface well known to your application.
So basically you will have an interface describing the abstraction your program needs from a DTF library (any DTF library), and your proxy DLL, loaded at runtime, will act as a mediator between that interface and the actual implementation of the specific DTF library.
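To make that concrete, here is a rough sketch of the plugin-style loading (IOrchestrationRunner and the loader are hypothetical names for your abstraction, not DTF types):
using System;
using System.Linq;
using System.Reflection;
using System.Threading.Tasks;

// The abstraction your application knows about:
public interface IOrchestrationRunner
{
    Task RunAsync(string orchestrationName, string input);
}

public static class OrchestrationRunnerLoader
{
    // assemblyPath comes from your custom configuration section.
    public static IOrchestrationRunner Load(string assemblyPath)
    {
        Assembly proxy = Assembly.LoadFrom(assemblyPath);

        // Find the concrete type in the proxy DLL that implements the abstraction.
        Type runnerType = proxy.GetTypes()
            .First(t => typeof(IOrchestrationRunner).IsAssignableFrom(t) && !t.IsAbstract);

        return (IOrchestrationRunner)Activator.CreateInstance(runnerType);
    }
}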
And so, per your questions:
how would an abstraction of the DTF framework look like in the
application layer?
Look at the diagram I provided.
Is it even possible to make the application layer unaware of the DTF?
Yes, as in any language that supports plugins/extensions/proxies.
You have to fit your implementation to the ubiquitous language. In this specific example: who does the encoding, and when? Whichever entity or service (the client) does the encoding will simply call an IEncode.encode interface method that takes care of the "details" involved in invoking DTF.
Yes, the definition for DTF belongs in the Infrastructure layer, and it should be treated like everything else in the infrastructure, such as Logging or Notifications. That is: the functionality should be put behind an interface that can be injected into the Domain and used by its domain clients.
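A small sketch of that, reusing the question's orchestration (IEncode is the hypothetical domain-facing interface; the DTF namespace is an assumption):
using DurableTask.Core; // assumed DTF namespace

// The Domain/Application layer sees only this:
public interface IEncode
{
    void Encode(string videoUrl);
}

// The Infrastructure implementation hides the DTF details:
public class DtfEncode : IEncode
{
    private readonly TaskHubClient client;

    public DtfEncode(TaskHubClient client)
    {
        this.client = client;
    }

    public void Encode(string videoUrl)
    {
        // Kicks off the orchestration from the question; callers never
        // reference DTF types directly.
        client.CreateOrchestrationInstance(typeof(EncodeVideoOrchestration), videoUrl);
    }
}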
You could wrap the activities in a library that returns plain Tasks, which might mix long-running activities with short-running ones. Something like:
public class BusinessContext
{
    private readonly OrchestrationContext context;

    public BusinessContext(OrchestrationContext context)
    {
        this.context = context;
    }

    public async Task<string> SendGreeting(string user)
    {
        return await context.ScheduleTask<string>(typeof(SendGreetingTask), user);
    }

    public async Task<string> GetUser()
    {
        return await context.ScheduleTask<string>(typeof(GetUserTask));
    }
}
Then the orchestration is a bit cleaner:
public override async Task<string> RunTask(OrchestrationContext context, string input)
{
    //string user = await context.ScheduleTask<string>(typeof(GetUserTask));
    //string greeting = await context.ScheduleTask<string>(typeof(SendGreetingTask), user);
    //return greeting;
    var bc = new BusinessContext(context);
    string user = await bc.GetUser();
    string greeting = await bc.SendGreeting(user);
    return greeting;
}
The Durable Task Framework has already done the abstraction for you. TaskActivity is your abstraction:
public abstract class TaskActivity<TInput, TResult> : AsyncTaskActivity<TInput, TResult>
{
    protected TaskActivity();
    protected abstract TResult Execute(TaskContext context, TInput input);
    protected override Task<TResult> ExecuteAsync(TaskContext context, TInput input);
}
You can work with the TaskActivity type in your Application layer; you don't care about its implementation. The implementation of TaskActivity goes into the lower layers (probably the Infrastructure layer, but some tasks might be more suitable as a Domain Service if they contain domain logic).
If you want, you can also group the task activities; for example, you can define a base class for an email activity:
Domain Layer Service (Abstraction)
public abstract class EmailActivityBase : TaskActivity<string, object>
{
    public string From { get; set; }
    public string To { get; set; }
    public string Body { get; set; }
}
This is your abstraction of an email activity. Your Application layer is only aware of the EmailActivityBase class.
Infrastructure Layer Implementation
The implementation of this class goes to the Infrastructure layer:
Production email implementation:
public class EmailActivity : EmailActivityBase
{
    protected override object Execute(TaskContext context, string input)
    {
        // TODO : actually send email to user
        return null;
    }
}
Test email implementation:
public class MockEmailActivity : EmailActivityBase
{
    protected override object Execute(TaskContext context, string input)
    {
        // TODO : create a file in local storage instead of sending an email
        return null;
    }
}
Where to Put the Task Orchestration Code?
Depending on your application, this may change. For example, if you are using AWS you can use AWS Lambda for orchestration; if you are using Windows Azure, you can use Azure Automation, or you can even create a separate Windows service to execute the tasks (obviously the Windows service will have a dependency on your application). Again, this really depends on your application, but it may not be a bad idea to put these housekeeping jobs in a separate module.

What pattern can I use to avoid instantiating unnecessary blocks in a pipeline?

My ASP.NET Core application uses our self-designed pipelines to process requests. Every pipeline contains one or more blocks, and there is no limit on the number of blocks; a real instance can have 200+. The pipeline goes through all blocks in a sequence defined by configuration, like:
Pipeline<DoActionsPipeline>().AddBlock<DoActionAddUserBlock>().AddBlock<DoActionAddUserToRoleBlock>()...
As in the example above (just an example), there are 200+ blocks configured in this pipeline; the blocks could be DoActionAddUserBlock, DoActionAddUserToRoleBlock, DoActionAddAddressToUserBlock, and so on. Many actions are mixed in one pipeline. (Please don't ask why they're mixed; it's just an example and doesn't matter to my question.)
In this example, each block checks the action name first and runs its logic only on a match. This is pretty bad: the pipeline has to instantiate all blocks and go through all of them to get a single request done.
Here is sample code; it's not very good, but it shows my pain:
public class DoActionAddUserBlock : BaseBlock<User, User, Context>
{
    public override User Execute(User arg, Context context)
    {
        if (context.ActionName != "AddUser")
        {
            return arg;
        }
        return AddUser(arg);
    }

    protected User AddUser(User user)
    {
        return user;
    }
}

public abstract class BaseBlock<TArg, TResult, TContext>
{
    public abstract TResult Execute(TArg arg, TContext context);
}

public class Context
{
    public string ActionName { get; set; }
}

public class User
{
}
I want to avoid instantiating blocks based on conditions; I think this belongs at the pipeline-configuration level. How can I achieve this? Attributes? Or something else?
[Condition("Action==AddUser")] // or [Action("AddUser")] // or [TypeOfArg("User")]
public class DoActionAddUserBlock : BaseBlock<User, User, Context>
{
    public override User Execute(User arg, Context context)
    {
        return AddUser(arg);
    }
    //...
}
Please show us the Pipeline<T>() method (is it a method or a class?), because it's essential for an accurate answer.
Anyway, I'll try my best with the information at hand.
Your goal is "I want to conditionally instantiate blocks", so you have to move your condition into an out-of-instance context, something you can do with attributes:
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ActionNameAttribute : Attribute
{
    public ActionNameAttribute(string name)
    {
        this.Name = name;
    }

    public string Name { get; }
}

[ActionName(nameof(AddUser))]
public class DoActionAddUserBlock : BaseBlock<User, User, Context>
{
    public override User Execute(User arg, Context context)
    {
        return AddUser(arg);
    }
}
Then do the check in the .AddBlock<T>() method (which, I guess, looks something like this):
public YourUnknownType<T> AddBlock<TBlock>()
{
    var type = typeof(TBlock);
    var attributes = type.GetCustomAttributes(typeof(ActionNameAttribute), inherit: true); // or false if you don't need inheritance
    var attribute = attributes.FirstOrDefault() as ActionNameAttribute;
    if (attribute != null && attribute.Name == this.Context.ActionName)
    {
        // place the block initialization here
    }
    return AnythingYouActuallyReturn();
}
Hope this helps!
IMO:
You should define different pipelines for different usages (see the sketch below). The pipeline is a design pattern that should be used only in particular cases, and maybe it is not a good pattern in your case.
I think it shouldn't be the pipeline's responsibility to check the action name and MAYBE run the logic. If you define a pipeline for some logic, it should just "go with the flow".
Therefore, pipelines should be built once on project startup, and initializing the whole pipeline just once is good.
Please think about whether using pipelines is right for your scenario.
I've built a simple pipeline with a builder and steps; you can check it here. It's in Polish, but all the code is in English, so you should get the point.
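To illustrate the first point, here is a sketch of building one pipeline per action at startup and dispatching by action name. The Pipeline<DoActionsPipeline>()/AddBlock<TBlock>() calls are taken from the question; IPipeline and Execute are assumed names:
using System.Collections.Generic;

// Built once on project startup:
var pipelines = new Dictionary<string, IPipeline>
{
    ["AddUser"] = Pipeline<DoActionsPipeline>()
        .AddBlock<DoActionAddUserBlock>()
        .AddBlock<DoActionAddUserToRoleBlock>(),
    ["AddAddress"] = Pipeline<DoActionsPipeline>()
        .AddBlock<DoActionAddAddressToUserBlock>(),
};

// At request time only the relevant blocks run; no per-block action check:
var result = pipelines[context.ActionName].Execute(user, context);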

NServiceBus events lost when published in separate thread

I've been working on getting long-running messages working with NServiceBus on an Azure transport. Based on this document, I thought I could get away with firing off the long process in a separate thread, marking the event handler task as complete, and then listening for custom OperationStarted or OperationComplete events. I noticed the OperationComplete event is not received by my handlers in most cases. In fact, the only time it is received is when I publish it immediately after the OperationStarted event is published. Any actual processing in between somehow prevents the completion event from being received. Here is my code:
Abstract class used for long-running messages
public abstract class LongRunningOperationHandler<TMessage> : IHandleMessages<TMessage> where TMessage : class
{
    protected ILog _logger => LogManager.GetLogger<LongRunningOperationHandler<TMessage>>();

    public Task Handle(TMessage message, IMessageHandlerContext context)
    {
        var opStarted = new OperationStarted
        {
            OperationID = Guid.NewGuid(),
            OperationType = typeof(TMessage).FullName
        };
        var errors = new List<string>();
        // Fire off the long running task in a separate thread
        Task.Run(() =>
        {
            try
            {
                _logger.Info($"Operation Started: {JsonConvert.SerializeObject(opStarted)}");
                context.Publish(opStarted);
                ProcessMessage(message, context);
            }
            catch (Exception ex)
            {
                errors.Add(ex.Message);
            }
            finally
            {
                var opComplete = new OperationComplete
                {
                    OperationType = typeof(TMessage).FullName,
                    OperationID = opStarted.OperationID,
                    Errors = errors
                };
                context.Publish(opComplete);
                _logger.Info($"Operation Complete: {JsonConvert.SerializeObject(opComplete)}");
            }
        });
        return Task.CompletedTask;
    }

    protected abstract void ProcessMessage(TMessage message, IMessageHandlerContext context);
}
Test Implementation
public class TestLongRunningOpHandler : LongRunningOperationHandler<TestCommand>
{
    protected override void ProcessMessage(TestCommand message, IMessageHandlerContext context)
    {
        // If I remove this, or lessen it to something like 200 milliseconds, the
        // OperationComplete event gets handled
        Thread.Sleep(1000);
    }
}
Operation Events
public sealed class OperationComplete : IEvent
{
    public Guid OperationID { get; set; }
    public string OperationType { get; set; }
    public bool Success => !Errors?.Any() ?? true;
    public List<string> Errors { get; set; } = new List<string>();
    public DateTimeOffset CompletedOn { get; set; } = DateTimeOffset.UtcNow;
}

public sealed class OperationStarted : IEvent
{
    public Guid OperationID { get; set; }
    public string OperationType { get; set; }
    public DateTimeOffset StartedOn { get; set; } = DateTimeOffset.UtcNow;
}
Handlers
public class OperationHandler : IHandleMessages<OperationStarted>,
    IHandleMessages<OperationComplete>
{
    static ILog logger = LogManager.GetLogger<OperationHandler>();

    public Task Handle(OperationStarted message, IMessageHandlerContext context)
    {
        return PrintJsonMessage(message);
    }

    public Task Handle(OperationComplete message, IMessageHandlerContext context)
    {
        // This is not hit if ProcessMessage takes too long
        return PrintJsonMessage(message);
    }

    private Task PrintJsonMessage<T>(T message) where T : class
    {
        var msgObj = new
        {
            Message = typeof(T).Name,
            Data = message
        };
        logger.Info(JsonConvert.SerializeObject(msgObj, Formatting.Indented));
        return Task.CompletedTask;
    }
}
I'm certain that the context.Publish() calls are being hit because the _logger.Info() calls are printing messages to my test console. I've also verified they are hit with breakpoints. In my testing, anything that runs longer than 500 milliseconds prevents the handling of the OperationComplete event.
If anyone can offer suggestions as to why the OperationComplete event is not hitting the handler when any significant amount of time has passed in the ProcessMessage implementation, I'd be extremely grateful to hear them. Thanks!
-- Update --
In case anyone else runs into this and is curious about what I ended up doing:
After an exchange with the developers of NServiceBus, I decided on a watchdog saga that implements the IHandleTimeouts interface to periodically check for job completion. I used saga data, updated when the job finished, to determine whether to fire the OperationComplete event in the timeout handler. This presented another issue: when using in-memory persistence, the saga data was not persisted across threads, even when it was locked by each thread. To get around this, I created an interface specifically for long-running, in-memory data persistence. This interface was injected into the saga as a singleton and used to read/write saga data across threads for long-running operations.
I know that in-memory persistence is not recommended, but for my needs configuring another type of persistence (like Azure tables) was overkill; I simply want the OperationComplete event to fire under normal circumstances. If a reboot happens during a running job, I don't need to persist the saga data: the job will be cut short anyway, and the saga timeout will handle firing the OperationComplete event with an error if the job runs longer than a set maximum time.
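For anyone curious what that looks like, here is a rough sketch of such a watchdog saga (the timeout message and saga data types are made up for illustration; the API shape follows NServiceBus 6+):
public class OperationWatchdogSagaData : ContainSagaData
{
    public Guid OperationID { get; set; }
    public bool JobFinished { get; set; }
}

public class OperationWatchdogTimeout
{
}

public class OperationWatchdogSaga : Saga<OperationWatchdogSagaData>,
    IAmStartedByMessages<OperationStarted>,
    IHandleTimeouts<OperationWatchdogTimeout>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OperationWatchdogSagaData> mapper)
    {
        mapper.ConfigureMapping<OperationStarted>(m => m.OperationID).ToSaga(s => s.OperationID);
    }

    public Task Handle(OperationStarted message, IMessageHandlerContext context)
    {
        Data.OperationID = message.OperationID;
        // Check back periodically until the job reports completion.
        return RequestTimeout<OperationWatchdogTimeout>(context, TimeSpan.FromSeconds(30));
    }

    public async Task Timeout(OperationWatchdogTimeout state, IMessageHandlerContext context)
    {
        if (Data.JobFinished)
        {
            await context.Publish(new OperationComplete { OperationID = Data.OperationID });
            MarkAsComplete();
        }
        else
        {
            await RequestTimeout<OperationWatchdogTimeout>(context, TimeSpan.FromSeconds(30));
        }
    }
}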
The cause of this is that if ProcessMessage is fast enough, you might use the current context before it gets invalidated, for example by being disposed.
By returning from Handle successfully, you're telling NServiceBus "I'm done with this message", so it may do what it wants with the context, such as invalidating it. In the background processor, you need an endpoint instance, not a message context.
By the time the new task starts running, you don't know whether Handle has returned or not, so you should consider the message already consumed and thus unrecoverable. If errors happen in your separate task, you can't retry them.
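As a sketch of that advice (assuming the endpoint's IMessageSession can be injected into the handler, as in newer NServiceBus hosting models):
public class SafeLongRunningHandler : IHandleMessages<TestCommand>
{
    private readonly IMessageSession messageSession;

    public SafeLongRunningHandler(IMessageSession messageSession)
    {
        this.messageSession = messageSession;
    }

    public Task Handle(TestCommand message, IMessageHandlerContext context)
    {
        var operationId = Guid.NewGuid();
        // Publish from the endpoint-level session, not the handler context,
        // so the background task never touches a disposed context.
        _ = Task.Run(async () =>
        {
            await messageSession.Publish(new OperationStarted
            {
                OperationID = operationId,
                OperationType = typeof(TestCommand).FullName
            });
            // ... long-running work, then publish OperationComplete the same way
        });
        return Task.CompletedTask;
    }
}
Note that, per the caveat above, anything published this way is outside the message-handling pipeline, so failures in the background task won't be retried.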
Avoid long-running processes without persistence. The sample you mention has a server that stores a work item from a message, and a process that polls this storage for work items. It's perhaps not ideal if you scale out the processors, but it won't lose messages.
To avoid constant polling, merge the server and the processor, poll unconditionally once at startup, and in Handle schedule a polling task. Take care that this task only polls if no other polling task is running; otherwise it may become worse than constant polling. You can use a semaphore to control this.
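A minimal sketch of that "at most one poller" guard using a semaphore:
using System;
using System.Threading;
using System.Threading.Tasks;

public static class PollGuard
{
    static readonly SemaphoreSlim pollGate = new SemaphoreSlim(1, 1);

    public static async Task TryPollAsync()
    {
        // Zero timeout: give up immediately if another poll is already running.
        if (!await pollGate.WaitAsync(TimeSpan.Zero))
            return;
        try
        {
            // Drain pending work items from storage here.
        }
        finally
        {
            pollGate.Release();
        }
    }
}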
To scale out, you must have more servers. You need to measure whether the cost of N processors polling is greater than sending to N servers in a round-robin fashion, for some N, to know which approach actually performs better. In practice, polling is good enough for a low N.
Modifying the sample for multiple processors may require less deployment and configuration effort: you just add or remove processors, while adding or removing servers means changing their endpoints in all the places (e.g. config files) that point to them.
Another approach would be to break the long process into steps. NServiceBus has sagas. That approach is usually implemented for a known or bounded number of steps, but it's still feasible for an unknown number of steps, although some might consider that an abuse of the seemingly intended purpose of sagas.

Generic HTTP result in ASP.NET MVC

I have already been involved in a couple of MVC projects, and in almost all of them I saw similar logic in some of the actions.
We often return an object like this:
public class HttpPrjNameResult<T>
{
    public PrjNameStatus Status { get; set; }
    public string Message { get; set; }
    public T Data { get; set; }
}
So I wonder:
Is there any standard MVC feature for that?
If not, why?
Or maybe I'm using the wrong pattern to write this code?
UPDATE:
I'll update the question a little.
Let's say I'm creating a web API with a method UpdateReports, which returns the list of updated entities:
public HttpTestResult<List<Report>> UpdateReports(IEnumerable<Report> reports)
{
    try
    {
        var res = SaveReports(reports);
        return new HttpTestResult<List<Report>>
        {
            Status = TestStatus.Success,
            Data = res
        };
    }
    catch (Exception e)
    {
        logger.Error(e);
        return new HttpTestResult<List<Report>>
        {
            Status = TestStatus.Error,
            Message = "Error while saving reports"
        };
    }
}
I find this kind of logic useful all over the project, I guess more in API-style code than in pure MVC, but still.
The question is: am I doing something wrong and reinventing the wheel here, or is there already a built-in feature for this kind of logic?
If I understand your question correctly, you are asking how to return either a success or a failure result based on whether you encounter an exception?
You may wish to look at the IExceptionFilter (MVC) or IExceptionFilter (Http) interface. These filters listen for any exceptions and perform some custom action that you define, for example (MVC example):
public void OnException(ExceptionContext filterContext)
{
    // perform some custom action, e.g. logging
    _logger.Log(filterContext.Exception);
    // return a particular status
    filterContext.Result = new HttpStatusCodeResult(HttpStatusCode.BadRequest);
}
If you use an exception filter to handle any exceptions that occur, your controllers are free to concentrate on just being controllers:
[ResponseType(typeof(List<Report>))]
public IHttpActionResult UpdateReports(IEnumerable<Report> reports)
{
    var results = SaveReports(reports).ToList();
    return Ok(results);
}
As I understand it, you're basically looking for the Command pattern.
There is a good article that helps you understand one way of implementing it. I can't say it's best practice, but you can get some ideas from it.
The main idea is to create ICommandHandler and ICommandDispatcher interfaces that let you put the basic logic in one place.
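A minimal sketch of those two interfaces and a dispatcher that centralizes the cross-cutting logic (the shapes here are illustrative, not taken from the article):
using System;

public interface ICommand { }

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

public interface ICommandDispatcher
{
    void Dispatch<TCommand>(TCommand command) where TCommand : ICommand;
}

// Cross-cutting concerns (logging, the success/error envelope) live here,
// in one place, instead of in every action:
public class CommandDispatcher : ICommandDispatcher
{
    private readonly IServiceProvider services;

    public CommandDispatcher(IServiceProvider services)
    {
        this.services = services;
    }

    public void Dispatch<TCommand>(TCommand command) where TCommand : ICommand
    {
        var handler = (ICommandHandler<TCommand>)services.GetService(typeof(ICommandHandler<TCommand>));
        handler.Handle(command);
    }
}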

SOLID-principle attempt, solid or not solid?

In our layered architecture I am designing a BLL component called AppHandover and have written the basic high-level code for it. I want it to follow the SOLID principles, be loosely coupled, adopt separation of concerns, and be testable.
Here is what AppHandover should do:
Check if the user owns the app; if not, throw an error.
Remove history if possible (i.e. no more apps are assigned to the user).
Transfer the ownership to the next instance.
The question is: am I on the right track, and does the following sample seem SOLID?
public interface ITransferOwnership
{
    void TransferOwnership(string userId, string appId, TransferDirection transferDirection);
}

public interface IOwnershipVerification
{
    bool UserOwnsApp(string userId, int budgetId, string appId);
}

public interface IPreserveHistoryCheck
{
    bool ShouldDeleteTemporaryBudgetData(string userId, int budgetId);
}

public interface IRemoveHistory
{
    void DeleteTemporaryBudgetData(string userId, int budgetId);
}
Handover process implementation
public class AppHandoverProcess : KonstruktDbContext, ITransferOwnership
{
    private IOwnershipVerification _ownerShipVerification;
    private IPreserveHistoryCheck _preserveHistory;
    private IRemoveHistory _removeHistory;
    private ITransferOwnership _transferOwnership;

    public AppHandoverProcess()
    {
    }

    public AppHandoverProcess(IOwnershipVerification ownerShipVerification,
        IPreserveHistoryCheck preserveHistory,
        IRemoveHistory removeHistory)
    {
        _ownerShipVerification = ownerShipVerification;
        _preserveHistory = preserveHistory;
        _removeHistory = removeHistory;
    }

    public void PerformAppHandover(string userId, string appId, int budgetId)
    {
        if (_ownerShipVerification.UserOwnsApp(userId, budgetId, appId))
        {
            if (_preserveHistory.ShouldDeleteTemporaryBudgetData(userId, budgetId))
            {
                _removeHistory.DeleteTemporaryBudgetData(userId, budgetId);
            }
            // handover logic here..
            _transferOwnership.TransferOwnership(userId, appId, TransferDirection.Forward);
        }
        else
        {
            throw new Exception("AppHandover: User does not own app, data cannot be handed over");
        }
    }
}
Concerning the code you outlined above, I definitely think you're on the right track. I would push the design a little further and treat TransferOwnership as one more injected dependency behind its own interface (right now the _transferOwnership field is never assigned).
Following this approach, your AppHandoverProcess is completely decoupled from its clients, and the behaviour will be defined in the service configuration.
Enforcing this isolation for TransferOwnership allows you to easily unit test any object implementing the interface without needing to mock the AppHandoverProcess dependency.
Also, any AppHandoverProcess test should be trivial, as the only things you need to verify are that your services are invoked or that the exception is thrown.
Hope this makes sense.
Regards.
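For illustration, this is the kind of test that isolation enables (a sketch using xUnit and Moq as assumed test libraries, with the constructor from the question):
[Fact]
public void PerformAppHandover_throws_when_user_does_not_own_app()
{
    var verification = new Mock<IOwnershipVerification>();
    verification.Setup(v => v.UserOwnsApp("user1", 1, "app1")).Returns(false);

    var process = new AppHandoverProcess(
        verification.Object,
        Mock.Of<IPreserveHistoryCheck>(),
        Mock.Of<IRemoveHistory>());

    Assert.Throws<Exception>(() => process.PerformAppHandover("user1", "app1", 1));
}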
I would make KonstruktDbContext an injectable dependency. AppHandoverProcess should not inherit from it, as persistence looks like a different responsibility.
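Sketched out, that suggestion looks like this (constructor injection instead of inheritance):
public class AppHandoverProcess : ITransferOwnership
{
    private readonly KonstruktDbContext _db;
    private readonly IOwnershipVerification _ownerShipVerification;
    // ... other dependencies as in the question

    public AppHandoverProcess(KonstruktDbContext db, IOwnershipVerification ownerShipVerification)
    {
        _db = db;
        _ownerShipVerification = ownerShipVerification;
    }
}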
