First, let's define 'UserContext' as the set of properties required to process a received message in the correct context of the user, so it is a bit more than just a string. In my case this also includes data about which application 'instance' the user was working in.
As I see it, there are two main options for providing a 'UserContext' for a message:
As a Header
As a base class for the message
When using a Header, I need to provide my own serialization; when using a base class, Rebus will handle the serialization for me.
So I spiked the base-class approach with a little sample program:
public class UserContext
{
public string Name { get; set; }
public int UserId { get; set; }
public Guid AppId { get; set; }
}
public class UserContextMessageBase
{
public UserContext UserContext { get; set; }
}
public class SimpleMessage : UserContextMessageBase
{
public string Data { get; set; }
}
internal class Program
{
private static void Main(string[] args)
{
using (var adapter = new BuiltinContainerAdapter())
using (var timer = new Timer())
{
//adapter.Register(typeof(UserContextHandler));
adapter.Register(typeof(SimpleMessageHandler));
var bus = Configure.With(adapter)
.Transport(t => t.UseMsmqAndGetInputQueueNameFromAppConfig())
.MessageOwnership(d => d.FromRebusConfigurationSection())
//.SpecifyOrderOfHandlers(o => o.First<UserContextHandler>())
.CreateBus()
.Start();
timer.Elapsed += delegate
{
bus.Send(new Messages.SimpleMessage { Data = Guid.NewGuid().ToString() });
};
timer.Interval = 10000;
timer.Start();
Console.WriteLine("Press enter to quit");
Console.ReadLine();
}
}
}
internal class UserContextHandler : IHandleMessages<UserContextMessageBase>
{
protected UserContext _context;
public void Handle(UserContextMessageBase message)
{
var old = Console.ForegroundColor;
if (_context != null)
{
Console.ForegroundColor = ConsoleColor.Red;
Console.WriteLine("Context is already populated");
}
Console.ForegroundColor = ConsoleColor.DarkYellow;
Console.WriteLine("Processing UserContextMessageBase");
// create the correct Context to process the message
_context = message.UserContext;
Console.ForegroundColor = old;
}
}
internal class SimpleMessageHandler : UserContextHandler, IHandleMessages<SimpleMessage>
{
public void Handle(SimpleMessage message)
{
// allow to use the _context to process this message
Console.WriteLine("Received SimpleMessage {0}", message.Data);
}
}
But when I run the program, I see that the SimpleMessage is getting processed twice. Is this 'by design' or perhaps a bug?
On the other hand, I can uncomment the registration for the UserContextHandler, and not inherit the SimpleMessageHandler from the UserContextHandler, but then I would have to stuff the UserContext into the MessageContext, and use it as such from the SimpleMessageHandler.
In my opinion, both approaches are valid - personally, I'd lean towards using headers because they're less noisy, and because that's really what they're there for :) but, as you correctly state, that requires that you somehow take care of "serializing" the user context into one or more headers, deserializing it again upon receiving each message.
The header approach could be done pretty elegantly, though, in the MessageSent and MessageContextEstablished events for sending and receiving respectively, staying out of your message handlers, and then the user context could be made available in the message context.
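For illustration, here is a rough sketch of how that could look. The Events hookup, AttachHeader call, and the Headers/Items dictionaries below are based on the old Rebus API, so treat the exact names and signatures as assumptions to verify against your Rebus version; SerializeUserContext and DeserializeUserContext are hypothetical helpers you would write yourself:
// Sketch only: carry the user context in a header, outside the message handlers.
var bus = Configure.With(adapter)
    .Transport(t => t.UseMsmqAndGetInputQueueNameFromAppConfig())
    .MessageOwnership(d => d.FromRebusConfigurationSection())
    .Events(e =>
    {
        // Sending: stuff the ambient user context into a header.
        e.MessageSent += (sendingBus, destination, message) =>
            sendingBus.AttachHeader(message, "user-context", SerializeUserContext());

        // Receiving: pull the header out and make it available in the message context.
        e.MessageContextEstablished += (receivingBus, messageContext) =>
        {
            object serialized;
            if (messageContext.Headers.TryGetValue("user-context", out serialized))
            {
                messageContext.Items["user-context"] = DeserializeUserContext((string)serialized);
            }
        };
    })
    .CreateBus()
    .Start();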
The other approach with using a message base class is definitely valid too, and I can see that you're hit by the fact that the lookup for the incoming message will get a new handler instance for each lookup - therefore, the pipeline will contain two handler instances, and the message will then be dispatched "as much as possible" (i.e. once for each compatible type/supertype) to each handler instance, thus resulting in effectively handling the message twice.
In your case, I suggest you do as you hint at towards the end: Make the UserContextHandler a separate handler that you ensure gets to be first in the pipeline, thus allowing it to stash the user context in MessageContext.GetCurrent().Items for all subsequent handlers to extract.
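A minimal sketch of that arrangement, reusing SpecifyOrderOfHandlers from your spike to put UserContextHandler first (the "userContext" key is just an arbitrary name chosen here):
internal class UserContextHandler : IHandleMessages<UserContextMessageBase>
{
    public void Handle(UserContextMessageBase message)
    {
        // Runs first in the pipeline and stashes the context for later handlers.
        MessageContext.GetCurrent().Items["userContext"] = message.UserContext;
    }
}

internal class SimpleMessageHandler : IHandleMessages<SimpleMessage>
{
    public void Handle(SimpleMessage message)
    {
        // Extract the context stashed by UserContextHandler earlier in the pipeline.
        var userContext = (UserContext)MessageContext.GetCurrent().Items["userContext"];
        Console.WriteLine("Received SimpleMessage {0} from user {1}", message.Data, userContext.Name);
    }
}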
I'd love to cook an example, though, showing a way to do exactly what you need, but by using headers (possibly in the form of simply a ;-separated list of key-value pairs, or something similar), but I'm afraid I cannot promise that such an example would be available within the next few days.
Let me know if it works out for you :)
Update: I've added a sample to Rebus' sample repo that demonstrates how an ambient user context can be picked up and passed around in a message header, including a few nifties around configuration and DI - it's called UserContextHeaders - check it out :)
Suppose I have a CQRS command that looks like below:
public sealed class DoSomethingCommand : IRequest
{
public Guid Id { get; set; }
public Guid UserId { get; set; }
public string A { get; set; }
public string B { get; set; }
}
That's processed in the following command handler:
public sealed class DoSomethingCommandHandler : IRequestHandler<DoSomethingCommand, Unit>
{
private readonly IAggregateRepository _aggregateRepository;
public DoSomethingCommandHandler(IAggregateRepository aggregateRepository)
{
_aggregateRepository = aggregateRepository;
}
public async Task<Unit> Handle(DoSomethingCommand request, CancellationToken cancellationToken)
{
// Find aggregate from id in request
var id = new AggregateId(request.Id);
var aggregate = await _aggregateRepository.GetById(id);
if (aggregate == null)
{
throw new NotFoundException();
}
// Translate request properties into a value object relevant to the aggregate
var something = new AggregateValueObject(request.A, request.B);
// Get the aggregate to do whatever the command is meant to do and save the changes
aggregate.DoSomething(something);
await _aggregateRepository.Save(aggregate);
return Unit.Value;
}
}
I have a requirement to save auditing information such as the "CreatedByUserID" and "ModifiedByUserID". This is a purely technical concern because none of my business logic is dependent on these fields.
I've found a related question here, where there was a suggestion to raise an event to handle this. This would be a nice way to do it because I'm also persisting changes based on the domain events raised from an aggregate using an approach similar to the one described here.
(TL;DR: Add events into a collection in the aggregate for every action, pass the aggregate to a single Save method in the repository, use pattern matching in that repository method to handle each event type stored in the aggregate to persist the changes)
e.g.
The DoSomething behavior from above would look something like this:
public void DoSomething(AggregateValueObject something)
{
// Business logic here
...
// Add domain event to a collection
RaiseDomainEvent(new DidSomething(/* required information here */));
}
The AggregateRepository would then have methods that looked like this:
public void Save(Aggregate aggregate)
{
var events = aggregate.DequeueAllEvents();
DispatchAllEvents(events);
}
private void DispatchAllEvents(IReadOnlyCollection<IEvent> events)
{
foreach (var @event in events)
{
DispatchEvent((dynamic) @event);
}
}
private void Handle(DidSomething @event)
{
// Persist changes from event
}
As such, adding a RaisedByUserID to each domain event seems like a good way to allow each event handler in the repository to save the "CreatedByUserID" or "ModifiedByUserID". It also seems like good information to have when persisting domain events in general.
My question is whether there is an easy way to make the UserId from the DoSomethingCommand flow down into the domain event, or whether I should even bother doing so.
At the moment, I think there are two ways to do this:
Option 1:
Pass the UserId into every single use case on an aggregate, so it can be passed into the domain event.
e.g.
The DoSomething method from above would change like so:
public void DoSomething(AggregateValueObject something, Guid userId)
{
// Business logic here
...
// Add domain event to a collection
RaiseDomainEvent(new DidSomething(/* required information here */, userId));
}
The disadvantage to this method is that the user ID really has nothing to do with the domain, yet it needs to be passed into every single use case on every single aggregate that needs the auditing fields.
Option 2:
Pass the UserId into the repository's Save method instead. This approach would avoid introducing irrelevant details to the domain model, even though the repetition of requiring a userId parameter on all the event handlers and repositories is still there.
e.g.
The AggregateRepository from above would change like so:
public void Save(Aggregate aggregate, Guid userId)
{
var events = aggregate.DequeueAllEvents();
DispatchAllEvents(events, userId);
}
private void DispatchAllEvents(IReadOnlyCollection<IEvent> events, Guid userId)
{
foreach (var @event in events)
{
DispatchEvent((dynamic) @event, userId);
}
}
private void Handle(DidSomething @event, Guid userId)
{
// Persist changes from event and use user ID to update audit fields
}
This makes sense to me as the userId is used for a purely technical concern, but it still has the same repetitiveness as the first option. It also doesn't allow me to encapsulate a "RaisedByUserID" in the immutable domain event objects, which seems like a nice-to-have.
Option 3:
Could there be any better ways of doing this or is the repetition really not that bad?
I considered adding a UserId field to the repository that can be set before any actions, but that seems bug-prone even if it removes all the repetition as it would need to be done in every command handler.
Could there be some magical way to achieve something similar through dependency injection or a decorator?
It will depend on the concrete case. I'll try to explain couple of different problems and their solutions.
You have a system where the auditing information is naturally part of the domain.
Let's take a simple example:
A banking system that makes contracts between the Bank and a Person. The Bank is represented by a BankEmployee. When a Contract is either signed or modified you need to include the information on who did it in the contract.
public class Contract {
public void AddAdditionalClause(BankEmployee employee, Clause clause) {
AddEvent(new AdditionalClauseAdded(employee, clause));
}
}
You have a system where the auditing information is not natural part of the domain.
There are a couple of things here that need to be addressed. For example, can only users issue commands to your system? Sometimes another system can invoke commands.
Solution: Record all incoming commands and their status after processing: successful, failed, rejected, etc.
Include the information of the command issuer.
Record the time when the command occurred. You can include the information about the issuer in the command or not.
public interface ICommand {
DateTime Timestamp { get; }
}
public class CommandIssuer {
public CommandIssuerType Type { get; private set; }
public CommandIssuerInfo Issuer {get; private set; }
}
public class CommandContext {
public ICommand cmd { get; private set; }
public CommandIssuer CommandIssuer { get; private set; }
}
public class CommandDispatcher {
public void Dispatch(ICommand cmd, CommandIssuer issuer){
LogCommandStarted(issuer, cmd);
try {
DispatchCommand(cmd);
LogCommandSuccessful(issuer, cmd);
}
catch(Exception ex){
LogCommandFailed(issuer, cmd, ex);
}
}
// or
public void Dispatch(CommandContext ctx) {
// rest is the same
}
}
pros: This keeps your domain free of the knowledge that someone issues commands.
cons: If you need more detailed information about the changes and want to match commands to events, you will have to correlate timestamps and other information. Depending on the complexity of the system this may get ugly.
Solution: Record all incoming commands in the entity/aggregate with the corresponding events. Check this article for a detailed example. You can include the CommandIssuer in the events.
public class SomethingAggregate {
public void Handle(CommandContext ctx) {
RecordCommandIssued(ctx);
Process(ctx.cmd);
}
}
You do include some information from the outside in your aggregates, but at least it's abstracted, so the aggregate just records it. It doesn't look so bad, does it?
Solution: Use a saga that contains all the information about the operation you are performing. In a distributed system you will need to do this most of the time anyway, so there it would be a good solution. In a non-distributed system it will add complexity and an overhead that you may not want to have :)
public class DoSomethingSagaCoordinator {
public void Handle(CommandContext cmdCtx) {
var saga = new DoSomethingSaga(cmdCtx);
sagaRepository.Save(saga);
saga.Process();
sagaRepository.Update(saga);
}
}
I've used all the methods described here and also a variation of your Option 2. In my version, when a request was handled, the repositories had access to a context that contained the user info, so when they saved events this information was included in an EventRecord object that had both the event data and the user info. It was automated, so the rest of the code was decoupled from it. I used DI to inject the context into the repositories. In this case I was just recording the events to an event log; my aggregates were not event sourced.
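As a rough sketch of that variation (all names here are hypothetical, not from a specific framework):
// Hypothetical sketch of the injected-context variation described above.
public interface IUserContextAccessor
{
    string UserId { get; }
}

public class EventRecord
{
    public object EventData { get; private set; }
    public string UserId { get; private set; }
    public DateTime Timestamp { get; private set; }

    public EventRecord(object eventData, string userId, DateTime timestamp)
    {
        EventData = eventData;
        UserId = userId;
        Timestamp = timestamp;
    }
}

public class AuditingAggregateRepository
{
    private readonly IUserContextAccessor _userContext;

    public AuditingAggregateRepository(IUserContextAccessor userContext)
    {
        // The context is injected by the DI container, scoped to the current request.
        _userContext = userContext;
    }

    public void Save(Aggregate aggregate)
    {
        foreach (var @event in aggregate.DequeueAllEvents())
        {
            // The record carries both the event data and the user info, so neither
            // the aggregate nor the domain events know about the auditing concern.
            var record = new EventRecord(@event, _userContext.UserId, DateTime.UtcNow);
            PersistRecord(record);
        }
    }

    private void PersistRecord(EventRecord record)
    {
        // Append to the event log here.
    }
}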
I use these guidelines to choose an approach:
If it's a distributed system -> go for Saga
If it's not:
Do I need to relate detailed information to the command?
Yes: pass Commands and/or CommandIssuer info to aggregates
If no then:
Does the database have good transactional support?
Yes: save Commands and CommandIssuer outside of aggregates.
No: save Commands and CommandIssuer in aggregates.
While implementing a WPF application I stumbled on the problem that my application needs some global data in every ViewModel. However, some of the ViewModels only need read access while others need read/write access to this data. At first I stumbled upon the Microsoft idea of a SessionContext, like so:
public class SessionContext
{
#region Public Members
public static string UserName { get; set; }
public static string Role { get; set; }
public static Teacher CurrentTeacher { get; set; }
public static Parent CurrentParent { get; set; }
public static LocalStudent CurrentStudent { get; set; }
public static List<LocalGrade> CurrentGrades { get; set; }
#endregion
#region Public Methods
public static void Logon(string userName, string role)
{
UserName = userName;
Role = role;
}
public static void Logoff()
{
UserName = "";
Role = "";
CurrentStudent = null;
CurrentTeacher = null;
CurrentParent = null;
}
#endregion
}
This isn't (in my opinion at least) nicely testable, and it gets problematic in case my global data grows (a thing that could easily happen in this application).
The next thing I found was the implementation of a Mediator (the Mediator pattern) from this link. I liked the idea of the design Norbert is going for there and thought about implementing something similar for my project. However, in this project I am already using the impressive MediatR NuGet package, which is also a Mediator implementation. So I thought "Why reinvent the wheel" if I could just use a nice and well-tested Mediator. But here starts my real question: to send changes made to the global data by other ViewModels to my read-only ViewModels, I would use notifications. That means:
public class ReadOnlyViewModel : NotificationHandler<Notification>
{
//some Member
//global Data
public string Username {get; private set;}
public async Task Handle(Notification notification, CancellationToken token)
{
Username = notification.Username;
}
}
The Question(s) now:
1. Is this good practice for MVVM? (It's just a feeling that doing this is wrong, because it feels like exposing business logic in the ViewModel.)
2. Is there a better way to separate this so that my ViewModel doesn't need to inherit from 5 to 6 different NotificationHandlers<,>?
Update:
As clarification of what I want to achieve here:
My goal is to implement a WPF application that manages some global data (let's say a username, as mentioned above) for one of its windows. Because I am using a DI container (and because of the kind of data involved), I have to declare the service @mm8 proposed as a singleton. That, however, is a little bit problematic in case (and I have that case) I need to open a new window that needs different global data at that time. That would mean I either need to change the lifetime to something like "kind of scoped", or (breaking the single responsibility of the class) add more fields for different purposes, or create n services for the n possible windows I may need to open. On the first idea of splitting the service: I would like to, because that would mitigate all the problems mentioned above, but it would make sharing the data problematic, because I don't know a reliable way to communicate this global data from the write service to the read service while something async or parallel is running in a background thread that could trigger the write service to update its data.
You could use a shared service that you inject your view models with. It can for example implement two interfaces, one for write operations and one for read operations only, e.g.:
public interface IReadDataService
{
object Read();
}
public interface IWriteDataService : IReadDataService
{
void Write();
}
public class GlobalDataService : IReadDataService, IWriteDataService
{
public object Read()
{
throw new NotImplementedException();
}
public void Write()
{
throw new NotImplementedException();
}
}
You would then inject the view models that should have write access with an IWriteDataService (and the other ones with an IReadDataService):
public ViewModel(IWriteDataService dataService) { ... }
This solution both makes the code easy to understand and easy to test.
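If you're wiring this up with a DI container, the key point is registering both interfaces against the same instance; for example, a sketch assuming Microsoft.Extensions.DependencyInjection:
// One shared instance exposed through both interfaces, so read-only and
// read/write view models observe the same data.
var services = new ServiceCollection();
services.AddSingleton<GlobalDataService>();
services.AddSingleton<IReadDataService>(sp => sp.GetRequiredService<GlobalDataService>());
services.AddSingleton<IWriteDataService>(sp => sp.GetRequiredService<GlobalDataService>());
The same one-instance-two-interfaces registration works in most containers, and a scoped lifetime per window would be one way to address the multi-window case from the update.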
There are many other posts regarding the recording of method execution time (for example through postsharp, through action filters, or through a custom method attribute).
So recording the time for a method to complete is relatively straightforward at this point.
What I am looking to do, however, is to get more fine-grained performance metrics on a per-request basis, utilizing, for example, the session ID to track all operations that occurred for a given request - and the time elapsed for each of them, not just the parent (i.e. controller action) method.
For example, I would like to be able to do something like:
namespace MvcApplication1.Controllers
{
public class ProductController : Controller
{
//record start of method
public ActionResult Index()
{
//record start of service1.method call
var data = service1.method();
//store total time of service1.method call
//record start of db call
var objects = db.select(obj).where(id=0)
//store total time of db call
return View();
}
//record total time of method
}
}
Ideally I want to link all of these operations (the parent method, the service call and the db call) together - the most likely candidate would be through the session id - but that means that each call would need access to the session id.
From what I've read, the best way of accomplishing this would be to utilize a method attribute to record the parent performance time, and then some sort of custom library function to store the various timing of the calls (probably using nlog to record).
What I am asking for are opinions on what the best way (if at all possible) to accomplish the above?
Am I missing something with any third party libraries that exist - i.e. does Unity or Postsharp provide this functionality (or some other library)?
Is it possible to link all of these records via the session ID? For example, I don't see how, via PostSharp, to (1) store individual method calls inside the MVC action, and (2) pass variables between calls.
According to your question, you need to log all operations related to a request. I'll provide my point of view; I hope it's useful.
Whether or not you use an existing framework depends on many factors; for now I'll focus on a custom implementation.
First, to accomplish this you need a log structure:
using System;
public enum LogEntryType
{
Event,
Task, // used for the timed sub-operations below
Message,
Warning,
Error
}
public class LogEntry
{
public int? LogEntryID { get; set; }
public int? LogEntryType { get; set; }
public DateTime? EntryDate { get; set; }
public TimeSpan? ElapsedTime { get; set; }
public string Key { get; set; }
public string Description { get; set; }
}
Next, you need to create a logger object and invoke it at each point you want to log, for example:
namespace MvcApp.Controllers
{
public class ProductController : Controller
{
protected ILogger Logger;
public ProductController(ILogger logger)
{
Logger = logger;
}
public ActionResult Index()
{
Logger.Write(LogEntryType.Event, Session.SessionID, "Start of '{0}' action call", "Index");
var serviceStopwatch = Stopwatch.StartNew();
Logger.Write(LogEntryType.Task, Session.SessionID, "Start of '{0}' task's execution", "GetData");
var data = service.GetData();
serviceStopwatch.Stop();
Logger.Write(LogEntryType.Task, Session.SessionID, serviceStopwatch.Elapsed, "End of '{0}' task's execution", "GetData");
var dbCallStopwatch = Stopwatch.StartNew();
Logger.Write(LogEntryType.Task, Session.SessionID, "Start of '{0}' db call", "GetObjects");
var objects = repository.GetObjects();
dbCallStopwatch.Stop();
Logger.Write(LogEntryType.Task, Session.SessionID, dbCallStopwatch.Elapsed, "End of '{0}' db call", "GetObjects");
Logger.Write(LogEntryType.Event, Session.SessionID, "End of '{0}' action call", "Index");
return View();
}
}
}
In the code above, we take the key's value from the session ID (automatically generated) to group all the entries.
The Logger.Write method's signatures should be something like these:
public void Write(LogEntryType logEntryType, string key, string message, params string[] args)
{
var item = new LogEntry
{
LogEntryType = (int?)logEntryType,
EntryDate = DateTime.Now,
Key = key,
Description = string.Format(message, args)
};
// Code for save log entry to text file, database, send email if it's an error, etc.
}
public void Write(LogEntryType logEntryType, string key, TimeSpan elapsedTime, string message, params string[] args)
{
var item = new LogEntry
{
LogEntryType = (int?)logEntryType,
EntryDate = DateTime.Now,
ElapsedTime = elapsedTime,
Key = key,
Description = string.Format(message, args)
};
// Code for save log entry to text file, database, send email if it's an error, etc.
}
Usually in real business applications we need to have workflow definitions for execution metrics and other things, but at this moment I don't know how complex you want to make this feature.
If you add all the logger calls at your required points and save them to a database (SQL or NoSQL), you can then extract all the information about the events of one session ID.
As you can see above, there are some log entry type definitions: warnings and errors. Suppose you add a try-catch block for error handling; inside the catch block, if there is an exception, you can log it:
Logger.Write(LogEntryType.Error, Session.SessionID, "There was an error on '{0}' task. Details: '{1}'", "Index", ex.Message);
As an additional point, it's better to make the logging operations asynchronous to avoid blocking the server's request handling.
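One way to sketch that, assuming only the LogEntry class above and standard BCL types (BackgroundLogger and Persist are hypothetical names), is to buffer entries and let a background thread persist them:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class BackgroundLogger : IDisposable
{
    private readonly BlockingCollection<LogEntry> _queue = new BlockingCollection<LogEntry>();
    private readonly Task _consumer;

    public BackgroundLogger()
    {
        // A single consumer drains the queue on a dedicated background thread.
        _consumer = Task.Factory.StartNew(() =>
        {
            foreach (var entry in _queue.GetConsumingEnumerable())
            {
                Persist(entry);
            }
        }, TaskCreationOptions.LongRunning);
    }

    // Called from request threads; returns immediately.
    public void Enqueue(LogEntry entry)
    {
        _queue.Add(entry);
    }

    public void Dispose()
    {
        _queue.CompleteAdding(); // let the consumer finish the remaining entries
        _consumer.Wait();
    }

    private void Persist(LogEntry entry)
    {
        // Code for saving the log entry to a text file, database, etc.
    }
}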
If this answer makes sense we can improve the concepts; this is a basic idea of how you can solve your issue.
I have an NServiceBus project which I will call 'connector'. My connector receives various kinds of messages, for example ClientChangeMessage and ClientContactChangeMessage. My connector has not implemented a saga, so I have a handler for each message: for ClientChangeMessage I have a ClientChangeMessageHandler which gets fired when the connector receives a ClientChangeMessage, and a ClientContactChangeMessageHandler for when I receive a ClientContactChangeMessage.
Now, while looking at the saga implementation, I found myself writing the following code (for the case where the client contact message comes before the ClientChange message, i.e. the client does not exist in the database):
public class ClientContactChangeMessageHandler : ClientMessageHandler,
IHandleMessages<ClientContactChangeMessage>,
IAmStartedByMessages<ClientContactChangeMessage>,
IHandleMessages<ClientChangeMessage>
{
[SetterProperty]
public IClientContactChangeDb ClientContactChangeDb{get;set;}
[SetterProperty]
public IBusRefTranslator BusRefTranslator{get;set;}
static ClientContactChangeMessageHandler()
{
Logger = LogManager.GetLogger(typeof (ClientContactChangeMessageHandler));
}
static ILog Logger;
public void Handle(ClientContactChangeMessage message)
{
//Some handling logic
}
public void Handle(ClientChangeMessage message)
{
throw new NotImplementedException();
}
public override void ConfigureHowToFindSaga()
{
ConfigureMapping<ClientContactChangeMessage>(s => s.Id, m => m.Id);
ConfigureMapping<ClientChangeMessage>(s => s.Id, m => m.Id);
// Notice that we have no mappings for the OrderAuthorizationResponseMessage message. This is not needed since the HR
// endpoint will do a Bus.Reply and NServiceBus will then automatically correlate the reply back to
// the originating saga
}
}
public class ClientMessageHandler : BaseMessageHandler
{
}
public class BaseMessageHandler : Saga<MySagaData>
{
}
public class MySagaData : IContainSagaData
{
public Guid Id { get; set; }
public string Originator { get; set; }
public string OriginalMessageId { get; set; }
}
As can be seen from the example, I now have to implement the Handle method for the ClientChangeMessage as well. I have already defined a handler for my ClientChangeMessage; do I have to handle it again here? Because if, further on in time, the ClientChangeMessage does come, I would expect it to be caught and processed by the ClientChangeMessageHandler and not by this one.
I would like to store a message if and only if I don't find the local reference for the client in my database. Looking at the examples for sagas on the web, I don't see any particular place or condition where this would be handled. I am hoping I would be storing the message inside the ClientContactChange Handle method.
Any help would be much appreciated,
Thanks
UPDATE:
It would seem that I did not understand properly how to implement an NServiceBus saga. The mistake I made here, as I see it, was that I considered a client contact change to be a single entity, i.e. independent of the client change message. I therefore think I was wrong in implementing the saga just for the client contact change. Here is how I had to change my code:
public class ClientSaga : Saga<ClientSagaState>,
IAmStartedByMessages<ClientChangeMessage>,
IAmStartedByMessages<ClientContactChangeMessage>,
IHandleTimeouts<ClientSagaState>
{
[SetterProperty]
public IClientContactChangeDb ClientContactChangeDb{get;set;}
[SetterProperty]
public IBusRefTranslator BusRefTranslator{get;set;}
public void Handle(ClientContactChangeMessage message)
{
//Some handling logic
//Check if client is not in database then store the state
this.ClientContactChange=message;
//if client is in the data base then
MarkAsComplete();
}
public void Handle(ClientChangeMessage message)
{
//Update or create the client depending on the situation
//check for dependencies
if(this.ClientContactChange !=null)
{
//Handle the contact change
}
}
public override void ConfigureHowToFindSaga()
{
ConfigureMapping<ClientContactChangeMessage>(s => s.ClientRef, m => m.ClientRef);
ConfigureMapping<ClientChangeMessage>(s => s.ClientRef, m => m.Id);
// Notice that we have no mappings for the OrderAuthorizationResponseMessage message. This is not needed since the HR
// endpoint will do a Bus.Reply and NServiceBus will then automatically correlate the reply back to
// the originating saga
}
}
public class ClientSagaState: IContainSagaData
{
// I don't need these three fields (they are required by IContainSagaData)
public Guid Id { get; set; }
public string Originator { get; set; }
public string OriginalMessageId { get; set; }
// the fields which I needed
public Guid ClientRef { get; set; }
public ClientChangeMessage ClientChange { get; set; }
public ClientContactChangeMessage ClientContactChange { get; set; }
}
Since both handlers handle the same message type, both will be called. If you like you could specify the order in which they get called using ISpecifyMessageHandlerOrdering. Furthermore, you can short circuit this chain based on a condition which may solve the secondary issue.
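For reference, a sketch of how that ordering is specified — based on the NServiceBus 2.x/3.x-era API, so verify the exact types against your version:
// Run the contact-change handler before the client-change handler.
public class HandlerOrdering : ISpecifyMessageHandlerOrdering
{
    public void SpecifyOrder(Order order)
    {
        order.Specify(First<ClientContactChangeMessageHandler>
            .Then<ClientChangeMessageHandler>());
    }
}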
If that does not work, you may want to consider versioning the message to support both scenarios in a graceful way.
I'm trying to implement basic auditing for a system where users can login, change their passwords and emails etc.
The functions I want to audit are all in the business layer and I would like to create an Audit object that stores the datetime the function was called including the result.
I recently attended a conference and one of the sessions was on well-crafted web applications and I am trying to implement some of the ideas. Basically I am using an Enum to return the result of the function and use a switch statement to update the UI in that layer. The functions use an early return which doesn't leave any time for creating, setting and saving the audit.
My question is: what approaches do others take when auditing business functions, and what approach would you take if you had a function like mine? (If you say ditch it I'll listen, but I'll be grumpy.)
The code looks a little like this:
public LoginResultEnum Login(string username, string password)
{
User user = repo.getUser(username, password);
if (user.failLogic1) { return failLogic1Enum; }
if (user.failLogic2) { return failLogic2Enum; }
if (user.failLogic3) { return failLogic3Enum; }
if (user.failLogic4) { return failLogic4Enum; }
user.AddAudit(new Audit(AuditTypeEnum.LoginSuccess));
user.Save();
return successEnum;
}
I could expand the if statements to create a new audit in each one but then the function starts to get messy. I could do the auditing in the UI layer in the switch statement but that seems wrong.
Is it really bad to stick it all in a try-catch with a finally, and use the finally to create the Audit object and set its information there, thus solving the early-return problem? My impression is that a finally is for cleaning up, not auditing.
My name is David, and I'm just trying to be a better coder. Thanks.
I can't say I have used it, but this seems like a candidate for Aspect Oriented Programming. Basically, you can inject code in each method call for stuff like logging/auditing/etc in an automated fashion.
Separately, making a try/catch/finally block isn't ideal, but I would run a cost/benefit to see if it is worth it. If you can reasonably refactor the code cheaply so that you don't have to use it, do that. If the cost is exorbitant, I would make the try/finally. I think a lot of people get caught up in the "best solution", but time/money are always constraints, so do what "makes sense".
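For illustration, a sketch of the try/finally variant, keeping the question's enum-based result (the LoginResultEnum values and the ToAuditType helper are assumed names, not a real API):
public LoginResultEnum Login(string username, string password)
{
    var result = LoginResultEnum.Unknown;
    User user = null;
    try
    {
        user = repo.getUser(username, password);
        if (user.failLogic1) { return result = LoginResultEnum.FailLogic1; }
        if (user.failLogic2) { return result = LoginResultEnum.FailLogic2; }
        return result = LoginResultEnum.Success;
    }
    finally
    {
        // finally runs after the return value is chosen but before the caller
        // sees it, so every early return still gets audited here.
        if (user != null)
        {
            user.AddAudit(new Audit(ToAuditType(result))); // map result -> AuditTypeEnum
            user.Save();
        }
    }
}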
The issue with an enum is it isn't really extensible. If you add new components later, your Audit framework won't be able to handle the new events.
In our latest system using EF we created a basic POCO for our audit event in the entity namespace:
public class AuditEvent : EntityBase
{
public string Event { get; set; }
public virtual AppUser AppUser { get; set; }
public virtual AppUser AdminUser { get; set; }
public string Message { get; set; }
private DateTime _timestamp;
public DateTime Timestamp
{
get { return _timestamp == DateTime.MinValue ? DateTime.UtcNow : _timestamp; }
set { _timestamp = value; }
}
public virtual Company Company { get; set; }
// etc.
}
In our Task layer, we implemented an abstract base AuditEventTask:
internal abstract class AuditEventTask<TEntity>
{
internal readonly AuditEvent AuditEvent;
internal AuditEventTask()
{
AuditEvent = InitializeAuditEvent();
}
internal void Add(UnitOfWork unitOfWork)
{
if (unitOfWork == null)
{
throw new ArgumentNullException(Resources.UnitOfWorkRequired_Message);
}
new AuditEventRepository(unitOfWork).Add(AuditEvent);
}
private AuditEvent InitializeAuditEvent()
{
return new AuditEvent {Event = SetEvent(), Timestamp = DateTime.UtcNow};
}
internal abstract void Log(UnitOfWork unitOfWork, TEntity entity, string appUserName, string adminUserName);
protected abstract string SetEvent();
}
Log must be implemented to record the data associated with the event, and SetEvent is implemented to force the derived task to set its event type:
internal class EmailAuditEventTask : AuditEventTask<Email>
{
internal override void Log(UnitOfWork unitOfWork, Email email, string appUserName, string adminUserName)
{
AppUser appUser = new AppUserRepository(unitOfWork).Find(au => au.Email.Equals(appUserName, StringComparison.OrdinalIgnoreCase));
AuditEvent.AppUser = appUser;
AuditEvent.Company = appUser.Company;
AuditEvent.Message = email.EmailType;
Add(unitOfWork);
}
protected override string SetEvent()
{
return AuditEvent.SendEmail;
}
}
The hiccup here is the internal base task - the base task COULD be public so that later additions to the Task namespace could use it - but overall I think that gives you the idea.
When it comes to implementation, our other tasks determine when logging should occur, so in your case:
AuditEventTask task;
if (user.failLogic1) { task = new FailLogin1AuditEventTask(fail 1 params); }
if (user.failLogic2) { task = new FailLogin2AuditEventTask(fail 2 params); }
if (user.failLogic3) { task = new FailLogin3AuditEventTask(etc); }
if (user.failLogic4) { task = new FailLogin4AuditEventTask(etc); }
task.Log();
user.Save();