I have an issue where I would like my handler to use data generated by the preceding handlers in the chain:
UpdateUserProfileImageCommandHandlerAuthorizeDecorator
UpdateUserProfileImageCommandHandlerUploadDecorator
UpdateUserProfileImageCommandHandler
My problem is both architectural and performance-related.
UpdateUserProfileImageCommandHandlerAuthorizeDecorator makes a call to the repository (Entity Framework) to authorize the user. I have other decorators similar to this that need to use and modify entities and send them up the chain.
UpdateUserProfileImageCommandHandler should just save the user to the database. I currently have to make another repository call and update the entity, even though I could have worked on the entity fetched in the previous decorator.
My issue is that the command only accepts the user Id and some properties to update. In the case where I get the user entity in the Authorize decorator, how can I keep working on that entity further up the chain? Is it OK to add a User property to the command and work on that?
Code:
public class UpdateUserProfileImageCommand : Command
{
    public UpdateUserProfileImageCommand(Guid id, Stream image)
    {
        this.Id = id;
        this.Image = image;
    }

    public Stream Image { get; set; }
    public Uri ImageUri { get; set; }
}
public class UpdateUserProfileImageCommandHandlerAuthorizeDecorator : ICommandHandler<UpdateUserProfileImageCommand>
{
    public void Handle(UpdateUserProfileImageCommand command)
    {
        // I would like to use this entity in `UpdateUserProfileImageCommandHandlerUploadDecorator`
        var user = userRepository.Find(u => u.UserId == command.Id);
        if (userCanModify(user, currentPrincipal))
        {
            decoratedHandler.Handle(command);
        }
    }
}
public class UpdateUserProfileImageCommandHandlerUploadDecorator : ICommandHandler<UpdateUserProfileImageCommand>
{
    public void Handle(UpdateUserProfileImageCommand command)
    {
        // Instead of asking for this from the repository again, I'd like to reuse the entity from the previous decorator
        var user = userRepository.Find(u => u.UserId == command.Id);
        fileService.DeleteFile(user.ProfileImageUri);
        command.ImageUri = fileService.Upload(generatedUri, command.Image);
        decoratedHandler.Handle(command);
    }
}
public class UpdateUserProfileImageCommandHandler : ICommandHandler<UpdateUserProfileImageCommand>
{
    public void Handle(UpdateUserProfileImageCommand command)
    {
        // Again I'm asking for the user...
        var user = userRepository.Find(u => u.UserId == command.Id);
        user.ProfileImageUri = command.ImageUri;
        // I actually have this in a PostCommit decorator.
        unitOfWork.Save();
    }
}
You should not try to pass on extra data just for the sake of performance. Besides, when using decorators you can't change the contract. Instead, you should allow that user entity to be cached, and this should typically be the responsibility of the repository implementation. With Entity Framework this is actually rather straightforward: you can call DbSet.Find(id) and EF will first look up the entity in its cache. This prevents unneeded round trips to the database. I do this all the time.
So the only thing you have to do is add a Find(key) or GetById method to your repository that maps to EF's Find(key) method, and you're done.
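For illustration, a minimal sketch of such a repository method; the DbContext and entity names here are assumptions, not taken from the question:

public class UserRepository : IUserRepository
{
    private readonly MyDbContext context; // hypothetical EF context with a Users DbSet

    public UserRepository(MyDbContext context)
    {
        this.context = context;
    }

    public User GetById(Guid id)
    {
        // DbSet<T>.Find checks EF's identity map first and only queries
        // the database when the entity isn't already tracked, so the
        // decorators and the handler can all call this cheaply.
        return this.context.Users.Find(id);
    }
}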
Furthermore, I agree with Pete. Decorators should be primarily for cross-cutting concerns. Adding other things in decorators can be okay sometimes, but you seem to be splitting up the core business logic over both the handler and its decorators. Writing the file to disk belongs to the core logic. You might be concerned about adhering to the Single Responsibility Principle, but it seems to me that you are splitting a single responsibility out over multiple classes. That doesn't mean your command handlers have to be big. As Pete said, you probably want to extract this into a service and inject that service into the handler.
Validating authorization is a cross-cutting concern, so having it in a decorator seems okay, but there are a few problems with your current implementation. First of all, doing it like this will leave you with many non-generic decorators, which leads to a lot of maintenance. Besides, you silently skip execution when the user is unauthorized, which is typically not what you want.
Instead of silently skipping, consider throwing an exception and preventing the user from being able to call this functionality under normal circumstances. That way, if the exception is thrown, there is either a bug in your code or the user is hacking your system. Silently skipping without throwing an exception can make it much harder to find bugs.
Another thing you might want to consider is implementing this authorization logic as a generic decorator: for instance, a generic authorization decorator or validation decorator. This might not always be possible, but you may be able to mark commands with an attribute. For instance, in the system I'm currently working on we mark our commands like this:
[PermittedRole(Role.LabManagement)]
We have an AuthorizationVerifierCommandHandlerDecorator<TCommand> that checks the attributes of the command being executed and verifies whether the current user is allowed to execute that command.
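A rough sketch of what such a decorator could look like; the shape of PermittedRoleAttribute and the injected IPrincipal are assumptions based on the description above:

public class AuthorizationVerifierCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratedHandler;
    private readonly IPrincipal currentPrincipal; // assumed to be injected per request

    public AuthorizationVerifierCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratedHandler,
        IPrincipal currentPrincipal)
    {
        this.decoratedHandler = decoratedHandler;
        this.currentPrincipal = currentPrincipal;
    }

    public void Handle(TCommand command)
    {
        var permitted = typeof(TCommand)
            .GetCustomAttributes(typeof(PermittedRoleAttribute), inherit: true)
            .Cast<PermittedRoleAttribute>()
            .ToList();

        // Throw instead of silently skipping when the user has none of
        // the roles the command is marked with.
        if (permitted.Any() &&
            !permitted.Any(p => this.currentPrincipal.IsInRole(p.Role.ToString())))
        {
            throw new SecurityException(
                "The current user may not execute " + typeof(TCommand).Name);
        }

        this.decoratedHandler.Handle(command);
    }
}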
UPDATE
Here's an example of what I think your UpdateUserProfileImageCommandHandler could look like:
public class UpdateUserProfileImageCommandHandler
    : ICommandHandler<UpdateUserProfileImageCommand>
{
    private readonly IUserRepository userRepository;
    private readonly IFileService fileService;

    public UpdateUserProfileImageCommandHandler(
        IUserRepository userRepository, IFileService fileService)
    {
        this.userRepository = userRepository;
        this.fileService = fileService;
    }

    public void Handle(UpdateUserProfileImageCommand command)
    {
        var user = this.userRepository.GetById(command.Id);
        this.fileService.DeleteFile(user.ProfileImageUri);
        command.ImageUri = this.fileService.Upload(generatedUri, command.Image);
        user.ProfileImageUri = command.ImageUri;
    }
}
Why do this via decorators in the first place?
Validation
The normal approach is to have clients do any and all validation required before submitting the command. Any command that is created/published/executed should have all (reasonable) validation performed before submitting. I include 'reasonable' because there are some things, like uniqueness, that can't be 100% validated beforehand. Certainly, authorization to perform a command can be done before submitting it.
Split Command Handlers
Having a decorator that handles just a portion of the command handling logic and then enriches the command object seems like over-engineering to me. IMHO, decorators should be used to extend a given operation with additional functionality, e.g. logging, transactions, or authentication (although like I said, I don't think that applies to decorating command handlers).
It seems that uploading the image and then assigning the new image URL in the database are the responsibility of one command handler. If you want the details of those two operations to be abstracted, then inject your handler with classes that do so, like IUserimageUploader.
Generally
Normally, commands are considered immutable, and should not be changed once created. This is to help enforce that commands should contain up front all the necessary information to complete the operation.
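For illustration, a minimal sketch of the command from the first question made immutable: get-only properties, with everything supplied through the constructor.

public class UpdateUserProfileImageCommand
{
    public UpdateUserProfileImageCommand(Guid id, Stream image)
    {
        this.Id = id;
        this.Image = image;
    }

    // Get-only: the command carries all required information up front
    // and cannot be changed after creation.
    public Guid Id { get; }
    public Stream Image { get; }
}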
I'm a little late here, but what I do is define an IUserContext class that you can inject via IoC. That way you can load the important user data once, cache it, and have all other dependencies use the same instance. You can then have that data expire after a while, and it will take care of itself.
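A minimal sketch of that idea; the names, and where the current user id comes from, are assumptions:

public interface IUserContext
{
    User CurrentUser { get; }
}

public class CachedUserContext : IUserContext
{
    private readonly IUserRepository userRepository;
    private readonly Guid currentUserId; // e.g. taken from the authenticated principal
    private User cachedUser;

    public CachedUserContext(IUserRepository userRepository, Guid currentUserId)
    {
        this.userRepository = userRepository;
        this.currentUserId = currentUserId;
    }

    public User CurrentUser
    {
        get
        {
            // Load once and reuse; a real implementation would also
            // expire the cached value after some time.
            return this.cachedUser
                ?? (this.cachedUser = this.userRepository.GetById(this.currentUserId));
        }
    }
}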
Related
I've got a repository interface (simplified example code):
public interface IPersonRepository
{
    Task<PersonDTO> Get();
}
With two implementations.
One for a direct connection to a database:
public class SqlPersonRepository : SqlRepository, IPersonRepository
{
    public SqlPersonRepository(IDbConnectionProvider dbCon) : base(dbCon) { }

    public async Task<PersonDTO> Get()
    {
        // use dbCon and dapper to get PersonDTO from database
    }
}
And another one for remote access via web api:
public class ApiPersonRepository : ApiRepository, IPersonRepository
{
    public ApiPersonRepository(IApiConnectionProvider apiCon) : base(apiCon) { }

    public async Task<PersonDTO> Get()
    {
        // use apiCon (contains base url and access token) to perform an HTTP GET request
    }
}
The interface makes sense here, because the server can use the SqlPersonRepository while the remote (native) client uses the ApiPersonRepository. For almost all of the use cases, this is all I need.
However, my application supports extracting a subset of data from the server so that the client application can run while offline. In this case I'm not just grabbing one person; I'm grabbing a large set of data (several to tens of megabytes), which will often be downloaded over a slow mobile connection. I need to pass in an IProgress implementation so I can report progress.
In those cases, I need an ApiDatabaseRepository that looks like this:
public class ApiDatabaseRepository : ApiRepository, IDatabaseRepository
{
    public ApiDatabaseRepository(IApiConnectionProvider apiCon) : base(apiCon) { }

    public async Task<DatabaseDTO> Get(IProgress<int?> progress)
    {
        // use apiCon (contains base url and access token) to perform an HTTP GET request
        // as data is pulled down, report back a percent downloaded, e.g.
        progress.Report(percentDownloaded);
    }
}
However, the SqlDatabaseRepository does NOT need to use IProgress (even if Dapper COULD report progress against a database query, which I don't think it can). Regardless, I'm not worried about progress when querying the database directly, but I am worried about it when making an API call.
So the easy solution is for the SqlDatabaseRepository implementation to accept the IProgress parameter with a default value of null, and have the implementing method just ignore that value:
public class SqlDatabaseRepository : SqlRepository, IDatabaseRepository
{
    public SqlDatabaseRepository(IDbConnectionProvider dbCon) : base(dbCon) { }

    public async Task<DatabaseDTO> Get(IProgress<int?> progress = null)
    {
        // use dbCon and dapper to get DatabaseDTO from database
        // progress is never used
    }
}
But that smells funny. And when things smell funny, I wonder if I'm doing something wrong. This method signature gives the impression that progress will be reported, even though it won't be.
Is there a design pattern or a different architecture I should be using in this case?
Oversimplifying this, you basically have two options: having a consistent interface or not.
There are, of course, other design patterns that might work here (e.g. some decorators and a factory method), but I believe them to be overkill.
If you stick to the general rule that a consistent interface is desirable, I think having a "not entirely implemented" callback isn't that bad. You could also consider just implementing it, or at least making it report something that makes sense.
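For example, instead of ignoring the parameter entirely, the SQL implementation could at least report completion, so the signature doesn't lie. A sketch, where GetDatabaseDto stands in for the actual Dapper query:

public class SqlDatabaseRepository : SqlRepository, IDatabaseRepository
{
    public SqlDatabaseRepository(IDbConnectionProvider dbCon) : base(dbCon) { }

    public async Task<DatabaseDTO> Get(IProgress<int?> progress = null)
    {
        // Report "indeterminate" up front...
        progress?.Report(null);

        var dto = await GetDatabaseDto(); // hypothetical helper wrapping the Dapper query

        // ...and completion at the end, so callers that pass a progress
        // sink still get meaningful callbacks.
        progress?.Report(100);
        return dto;
    }
}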
I would definitely avoid a construction with two different interfaces. Although sometimes that is the better option (when checking whether something supports something, e.g. testing if a hardware component is available), I see it as overkill in your scenario. It would also put more logic on the caller's side, and unless you want to open a progress dialog only in this scenario, I would avoid it.
A last note: there are alternative progress-reporting patterns, such as using an event, or passing in an optional callback method. The latter looks like your solution but is in fact a little different.
It still leaves you with the same issue in the end, but it might be worth considering.
There are many more solutions, but given the context you provided, I am not sure they apply. And keep in mind, this is highly opinion-based.
I am developing an MVC 5 web application using Domain-Driven Design. My controllers basically make calls to a service layer that either returns data (entities or lists of entities) or performs actions (business processes), depending upon the scenario.
Here is my confusion. I need an effective strategy for logging exceptions for troubleshooting purposes, while either displaying friendly messages to the user or, under certain conditions, not displaying anything at all.
For example, let's say some code in the service layer results in a NullReferenceException; I would like to handle this gracefully for the user while logging the exception for troubleshooting. Additionally, let's say an exception occurs in the repository layer, such as a connection error while trying to access the database. This is another scenario I would like to handle in the same manner.
What is the recommended approach to this situation when you are dealing with DDD? I have my repository -> service layer -> controller -> UI.
My current approach is to create an exception specific to the repository layer and one specific to the service layer; failures in the repository layer would be bubbled up to the service layer, and the UI could then handle them at its discretion.
However, I would like to utilize Azure logging to add the errors to log files for further investigation.
What is the recommended way of handling errors between the various layers?
What is the recommended place for adding logging in this layered scenario?
It seems like it would be bad to put Azure logging in the service or repository layers, at least without using a wrapper class?
Is there a global way to handle this without having to account for every exception (a catch-all for any exceptions that might fall through the cracks)?
There's not really a definitive answer here, but the following is a solution that I have used a few times and that works quite well (not only for exception handling, but for all cross-cutting concerns).
A possible way is to use the decorator pattern. I have written a post about this which you can check here: http://www.kenneth-truyers.net/2014/06/02/simplify-code-by-using-composition/
I also recommend you check out Greg Young's video on roughly the same subject: http://www.infoq.com/presentations/8-lines-code-refactoring
In order to use the decorator pattern, you could transform your methods that return data and execute business processes into Query and Command handlers. Say you have the following methods:
List<Customer> GetCustomers(string country, string orderBy)
{
    ...
}

void CreateInvoice(int customerId, decimal amount)
{
    ...
}

void CreateCustomer(string name, string address)
{
    ...
}
Now, these methods do not conform to an interface, so you can't extract one. However, you could change them to a query and command pattern:
Interfaces:
interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

interface ICommandHandler<T>
{
    void Handle(T command);
}
Now you can change your classes so they implement this interface:
class GetCustomersHandler : IQueryHandler<CustomerQuery, List<Customer>>
{
    public List<Customer> Handle(CustomerQuery query)
    {
        // CustomerQuery is a simple message type class which contains country and orderby
        // just as in the original method, but now packed up in a 'message'
    }
}
class CreateInvoiceHandler : ICommandHandler<CreateInvoice>
{
    public void Handle(CreateInvoice command)
    {
        // CreateInvoice is a simple message type class which contains customerId and amount
        // just as in the original method, but now packed up in a 'message'
    }
}
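The message types referred to in the comments are plain data holders, for example:

public class CustomerQuery
{
    public string Country { get; set; }
    public string OrderBy { get; set; }
}

public class CreateInvoice
{
    public int CustomerId { get; set; }
    public decimal Amount { get; set; }
}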
When you have this, you can create a class that implements the cross-cutting behavior (here, exception handling) while wrapping (decorating) the underlying handler:
class QueryExceptionHandler<TQuery, TResult> : IQueryHandler<TQuery, TResult>
{
    private readonly IQueryHandler<TQuery, TResult> _innerQueryHandler;

    public QueryExceptionHandler(IQueryHandler<TQuery, TResult> innerQueryHandler)
    {
        _innerQueryHandler = innerQueryHandler;
    }

    public TResult Handle(TQuery query)
    {
        try
        {
            return _innerQueryHandler.Handle(query);
        }
        catch (Exception ex)
        {
            // Deal with the exception here (log it, wrap it, ...),
            // then rethrow or return a fallback result.
            throw;
        }
    }
}
When you want to use this, you could instantiate it like this (from the UI code):
IQueryHandler<CustomerQuery, List<Customer>> handler =
    new QueryExceptionHandler<CustomerQuery, List<Customer>>(new GetCustomersHandler());
var customers = handler.Handle(new CustomerQuery { Country = "us", OrderBy = "Name" });
Of course, this QueryExceptionHandler can be reused for other handlers as well, for example:
IQueryHandler<InvoiceQuery, List<Invoice>> handler =
    new QueryExceptionHandler<InvoiceQuery, List<Invoice>>(new GetInvoicesHandler());
var invoices = handler.Handle(new InvoiceQuery { MinAmount = 100 });
Now the exception handling is done in one class, and none of your other classes need to be bothered with it. The same idea can be applied to the business actions (the command side).
Aside from that, in this case I added just one layer, for exception handling. You could wrap the exception handler inside a logger as well, and so build various decorators on top of each other. That way you can create one class for logging, one for exception handling, one for ...
Not only does this allow you to separate that behavior from the actual classes, it also allows you to customize it for every different handler should you wish to (for example, wrap a customer handler in exception handling and logging, but an invoice handler only in a logging handler).
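For example, stacking a (hypothetical) QueryLogHandler on top of the exception handler would look like this:

IQueryHandler<CustomerQuery, List<Customer>> handler =
    new QueryLogHandler<CustomerQuery, List<Customer>>(          // hypothetical logging decorator
        new QueryExceptionHandler<CustomerQuery, List<Customer>>(
            new GetCustomersHandler()));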
Constructing your handlers like in the example above is very cumbersome (especially once you start adding multiple decorators), but it's just to show you how they work together.
It would be a better idea to use dependency injection for that. You could do manual DI, use a functional approach (see Greg Young's video), or use a DI container.
I know it looks like a very complicated sample, but you'll soon notice that once you have the little structure set up, it's actually quite easy to work with. You can refer to my article, where you can also see a solution using a DI container.
I'm hoping this is straightforward. I'm trying to implement the Command pattern in the MVC application I'm building. The only issue I'm seeing is that the number of command objects is really high.
I have 11 tables each with around 20 fields that need to be updated.
My question is: do I need a different object for each field?
This is a healthcare application, so let me give an example.
I have a table called KeyHospital; this table stores hospital information, and only hospital information, for our hospital clients. I've used LINQ to SQL for my connection to the database. The KeyHospital table is probably the largest as far as fields go. What I've done is create a new object per field.
public class ChangeHospitalDEA : ICommand
{
    public ChangeHospitalDEA(int id, string newDEA)
    {
        var Thishospital = (from Hospital in _context.Keyhospitals
                            where Hospital.ID == id
                            select Hospital).Single();
        Thishospital.DEAnum = newDEA;
    }
}
I have ICommand as an abstract class.
public abstract class ICommand
{
    public AllkeysDataContext _context = new AllkeysDataContext();

    public void Execute()
    {
        _context.SubmitChanges();
    }
}
Am I doing this correctly? I just feel like I'm writing lots of code for this, and it's one of my first times using the Command pattern.
This is considerably too granular. Instead, you should define commands for actions such as inserting a new entity (a graph of one or more related objects), updating an entity, etc.
A command would be used whenever you want to perform an action. In some cases changing a single field may constitute an action, such as returning additional data or providing suggestions, e.g. an AJAX call to validate an entered address. Also, ICommand is a poor name choice for an abstract base class: ICommand is a common interface name in command architectures (the "I" prefix is conventionally reserved for interface names). I deal predominantly with MVVM, but I would suspect that MVC has a common command interface already.
This might not be what you want to hear, but I would restructure your user interface into a task-based UI: organize it around common tasks, for example "Change Drug Enforcement Administration number", and refine these tasks over time.
If you have existing analytics, you will notice that certain fields are only ever changed together, and these could be logically grouped into common tasks.
I also feel it is bad practice to hide database calls within constructors. I would move that LINQ statement to the Execute method and have the constructor only initialize public properties or private fields, which the Execute method then uses.
The major reason for moving the query into the Execute method is to reduce the window for optimistic concurrency errors.
I also feel that calling the base class ICommand is not good practice and may lead to confusion in the future; I would recommend calling it CommandBase, or changing it back to an interface.
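Putting those two suggestions together, a sketch of the restructured command, assuming the base class is renamed to CommandBase and its Execute method made virtual:

public class ChangeHospitalDEA : CommandBase
{
    private readonly int id;
    private readonly string newDEA;

    public ChangeHospitalDEA(int id, string newDEA)
    {
        // The constructor only captures state; no database work yet.
        this.id = id;
        this.newDEA = newDEA;
    }

    public override void Execute()
    {
        // The query now runs as late as possible, narrowing the window
        // for optimistic concurrency conflicts.
        var hospital = (from h in _context.Keyhospitals
                        where h.ID == this.id
                        select h).Single();
        hospital.DEAnum = this.newDEA;
        _context.SubmitChanges();
    }
}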
I have a UnitOfWork attribute, something like this:
public class UnitOfWorkAttribute : ActionFilterAttribute
{
    public IDataContext DataContext { get; set; }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.Controller.ViewData.ModelState.IsValid)
        {
            DataContext.SubmitChanges();
        }

        base.OnActionExecuted(filterContext);
    }
}
As you can see, it has a DataContext property, which is injected by Castle.Windsor. DataContext has a lifestyle of PerWebRequest, meaning a single instance is reused within each request.
The thing is, from time to time I get a "DataContext is disposed" exception in this attribute, and I remember that ASP.NET MVC 3 tries to cache action filters somehow, so maybe that causes the problem?
If so, how do I solve the issue: by not using any properties and resolving from a service locator inside the method?
Is it possible to tell ASP.NET MVC not to cache a filter, if it does cache them?
I would strongly advise against using such a construct, for a couple of reasons:
It is not the responsibility of the controller (or an attribute decorating it) to commit the data context.
This would lead to lots of duplicated code (you'll have to decorate lots of methods with this attribute).
At that point in the execution (in the OnActionExecuted method) you can't reliably determine whether it is actually safe to commit the data.
Especially the third point should have drawn your attention. The mere fact that the model is valid doesn't mean that it is okay to submit the changes of the data context. Look at this example:
[UnitOfWorkAttribute]
public View MoveCustomer(int customerId, Address address)
{
    try
    {
        this.customerService.MoveCustomer(customerId, address);
    }
    catch { }

    return View();
}
Of course this example is a bit naive. You would hardly ever swallow each and every exception; that would just be plain wrong. But what it does show is that it is very well possible for the action method to finish successfully when the data should not be saved.
But besides this, is committing the transaction really a concern of MVC? And if you decide it is, do you really want to decorate all action methods with this attribute? Wouldn't it be nicer if you could implement this without having to do anything at the controller level? Because which attributes are you going to add after this one? Authorization attributes? Logging attributes? Tracing attributes? Where does it stop?
What you can do instead is model all business operations that need to run in a transaction in a way that allows you to add this behavior dynamically, without changing any existing code or adding new attributes all over the place. A way to do this is to define an interface for these business operations. For instance:
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
Using this interface, your controller would look like this:
private readonly ICommandHandler<MoveCustomerCommand> handler;

// constructor
public CustomerController(
    ICommandHandler<MoveCustomerCommand> handler)
{
    this.handler = handler;
}

public View MoveCustomer(int customerId, Address address)
{
    var command = new MoveCustomerCommand
    {
        CustomerId = customerId,
        Address = address,
    };

    this.handler.Handle(command);

    return View();
}
For each business operation in the system you define such a class (a DTO / Parameter Object); in the example, the MoveCustomerCommand class, which contains merely the data. The behavior is defined in a class that implements ICommandHandler<MoveCustomerCommand>. For instance:
public class MoveCustomerCommandHandler
    : ICommandHandler<MoveCustomerCommand>
{
    private readonly IDataContext context;

    public MoveCustomerCommandHandler(IDataContext context)
    {
        this.context = context;
    }

    public void Handle(MoveCustomerCommand command)
    {
        // TODO: Put logic here.
    }
}
This looks like an awful lot of extra useless code, but this is actually really useful (and if you look closely, it isn't really that much extra code anyway).
The interesting thing is that you can now define one single decorator that handles the transactions for all command handlers in the system:
public class TransactionalCommandHandlerDecorator<TCommand>
    : ICommandHandler<TCommand>
{
    private readonly IDataContext context;
    private readonly ICommandHandler<TCommand> decoratedHandler;

    public TransactionalCommandHandlerDecorator(IDataContext context,
        ICommandHandler<TCommand> decoratedHandler)
    {
        this.context = context;
        this.decoratedHandler = decoratedHandler;
    }

    public void Handle(TCommand command)
    {
        this.decoratedHandler.Handle(command);

        this.context.SubmitChanges();
    }
}
This is not much more code than your UnitOfWorkAttribute, but the difference is that this decorator can be wrapped around any implementation and injected into any controller, without the controller knowing about it. And directly after executing a command is really the only safe place where you actually know whether you can save the changes.
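Wiring it up by hand (a DI container would normally do this for you) could look like this:

IDataContext context = new DataContext(); // whatever implements IDataContext
ICommandHandler<MoveCustomerCommand> handler =
    new TransactionalCommandHandlerDecorator<MoveCustomerCommand>(
        context,
        new MoveCustomerCommandHandler(context));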
You can find more information about this way of designing your application in this article: Meanwhile... on the command side of my architecture
Today I half-accidentally found the root cause of the problem.
As seen in the question, the filter has a property that is injected by Castle.Windsor. Those who use ASP.NET MVC know that for this to work you need an IFilterProvider implementation that can use the IoC container for dependency injection.
So I started looking at its implementation and noticed that it derives from FilterAttributeFilterProvider, and FilterAttributeFilterProvider has this constructor:
public FilterAttributeFilterProvider(bool cacheAttributeInstances)
So you can control whether or not your attribute instances are cached.
After disabling this cache, the site blew up with NullReferenceExceptions, which allowed me to find one more thing that had been overlooked and was causing undesired side effects.
The thing was that the original filter provider was not removed after we added the Castle.Windsor filter provider. While caching was enabled, the IoC filter provider was creating the instances and the default filter provider was reusing them, so all dependency properties were filled with values; that was not clearly noticeable, except for the fact that filters were running twice. After caching was disabled, the default provider had to create instances by itself, so the dependency properties were left unfilled, which is why the NullReferenceExceptions occurred.
I'm working on my first DDD project, and I think I understand the basic roles of entities, data access objects, and their relationship. I have a basic validation implementation that stores each validation rule with its associated entity. This works fine for rules that apply only to the current entity, but it falls apart when other data is needed. For example, if I have the restriction that a username must be unique, I would like the IsValid() call to return false when there is an existing user with the current name.
However, I'm not finding any clean way to keep this validation rule on the entity itself. I'd like to have an IsNameUnique function on the entity, but most of the solutions for this would require me to inject a user data access object. Should this logic be in an external service? If so, how do I still keep the logic with the entity itself? Or is this something that should live outside of the user entity?
Thanks!
I like Samuel's response, but for the sake of simplicity, I would recommend implementing a Specification. You create a Specification that returns a boolean to see if an object meets certain criteria. Inject an IUserRepository into the Specification, check if a user already exists with that name, and return a boolean result.
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T entity);
}
public class UniqueUsernameSpecification : ISpecification<User>
{
    private readonly IUserRepository _userRepository;

    public UniqueUsernameSpecification(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }

    public bool IsSatisfiedBy(User user)
    {
        User foundUser = _userRepository.FindUserByUsername(user.Username);
        return foundUser == null;
    }
}
// App code
User newUser;
// ... registration stuff...
var userRepository = new UserRepository();
var uniqueUserSpec = new UniqueUsernameSpecification(userRepository);

if (uniqueUserSpec.IsSatisfiedBy(newUser))
{
    // proceed
}
In DDD, there is a concept called an aggregate. It is basically responsible for consistency within the application.
IMHO, in this case specifically, I guess the CustomerRepository would be inside something like the "Customer aggregate", with the Customer class as the aggregate root.
The root would then be responsible for doing all this work, and no one else could access the CustomerRepository. There are some possibilities:
The CustomerRepository could throw an exception if the name is not unique (and the Customer would catch it and return the error, or something like that)
The CustomerRepository could have an IsValidUserName() method that the Customer would call before doing anything else
Any other option you can think of
I'm going to say that this is simply beyond the scope of DDD.
What you have with DDD is an aggregation of events that produces a useful model. However, data relationships between such models are not necessarily possible.
What consistency model are you using?
If you can commit multiple events in an ACID transaction, you can ensure that changes to a group of aggregates happen atomically.
If you use an eventual consistency model you might not be able to actually validate these things until later. And when you do, you might need to compensate for things that have supposedly happened but are no longer valid.
Uniqueness must be answered within a context. If your model is small (in the thousands), you can have an aggregate represent the set of values that you want to be unique, assuming a group aggregate transaction is possible.
If not, well, then you simply project your model to a database that supports a uniqueness constraint. If this projection fails, you have to go back to your aggregate and somehow mark it as invalid, and all the while you need to consider failure. This is where a distributed long-running process, like the saga pattern, can be useful, but it also requires that you think about additional things.
In summary, if you cannot use a storage model with a strong consistency guarantee, everything becomes a lot more complicated. That said, there are good, well-thought-out patterns for managing failures in a distributed environment, but they turn the problem on its head a bit, because now you need to consider failure at every point along the way. That is a good thing, but it does require a bigger time investment.