Graceful exception handling using DDD and Azure - c#

I am developing an MVC 5 web application using Domain Driven Design. My controllers basically make calls to a service layer that either returns data (entities or lists of entities) or performs actions (business processes), depending on the scenario.
Here is my confusion. I need an effective strategy for logging exceptions that occur for troubleshooting purposes, while either displaying friendly messages to the user or not displaying anything at all under certain conditions.
For example, let's say some code in the service layer results in a NullReferenceException. I would like to handle this gracefully for the user while logging the exception for troubleshooting. Additionally, let's say an exception occurs in the repository layer, such as a connection error while trying to access the database. This is another scenario I would like to handle in the same manner.
What is the recommended approach to this situation when you are dealing with DDD? I have my repository -> service layer -> controller -> UI.
My current approach is to create an exception type specific to the repository layer and one specific to the service layer; failures that occur in the repository layer would be bubbled up to the service layer, where the UI could handle them at its discretion.
However, I would like to utilize Azure logging to add the errors to log files for further investigation.
What is the recommended way of handling errors between the various layers?
What is the recommended place for adding logging in this layered scenario?
It seems like it would be bad to put Azure logging in the service or repository layers, at least without using a wrapper class?
Is there a global way to handle this without having to account for every exception (a catch-all for any exceptions that might fall through the cracks)?

There's not really a definitive answer here, but the following is a solution that I have used a few times and it works quite well. (Not only for exception handling, but for all cross-cutting concerns.)
A possible way is to use the decorator pattern. I have written a post about this which you can check here: http://www.kenneth-truyers.net/2014/06/02/simplify-code-by-using-composition/
I also recommend you check out Greg Young's video on roughly the same subject: http://www.infoq.com/presentations/8-lines-code-refactoring
In order to use the decorator pattern, you could transform your methods that return data and execute business processes into Query and Command handlers. Say you have the following methods:
List<Customer> GetCustomers(string country, string orderBy)
{
...
}
void CreateInvoice(int customerId, decimal amount)
{
...
}
void CreateCustomer(string name, string address)
{
...
}
Now, these methods do not conform to an interface, so you can't extract one. However, you could change them to a query and command pattern:
Interfaces:
interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}
interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}
Now you can change your classes so they implement this interface:
class GetCustomersHandler : IQueryHandler<CustomerQuery, List<Customer>>
{
    public List<Customer> Handle(CustomerQuery query)
    {
        // CustomerQuery is a simple message type class which contains country and orderby
        // just as in the original method, but now packed up in a 'message'
    }
}
class CreateInvoiceHandler : ICommandHandler<CreateInvoice>
{
    public void Handle(CreateInvoice command)
    {
        // CreateInvoice is a simple message type class which contains customerId and amount
        // just as in the original method, but now packed up in a 'message'
    }
}
When you have this, you can create a class that implements the exception handling (or logging) but wraps (decorates) the underlying handler:
class QueryExceptionHandler<TQuery, TResult> : IQueryHandler<TQuery, TResult>
{
    private readonly IQueryHandler<TQuery, TResult> _innerQueryHandler;

    public QueryExceptionHandler(IQueryHandler<TQuery, TResult> innerQueryHandler)
    {
        _innerQueryHandler = innerQueryHandler;
    }

    public TResult Handle(TQuery query)
    {
        try
        {
            return _innerQueryHandler.Handle(query);
        }
        catch (Exception ex)
        {
            // Deal with the exception here (log it, translate it, rethrow, ...)
            throw;
        }
    }
}
When you want to use this, you could instantiate it like this (from the UI code):
IQueryHandler<CustomerQuery, List<Customer>> handler =
    new QueryExceptionHandler<CustomerQuery, List<Customer>>(new GetCustomersHandler());
var customers = handler.Handle(new CustomerQuery { Country = "us", OrderBy = "Name" });
Of course, this QueryExceptionHandler can be reused for other handlers as well (example):
IQueryHandler<InvoiceQuery, List<Invoice>> handler =
    new QueryExceptionHandler<InvoiceQuery, List<Invoice>>(new GetInvoicesHandler());
var invoices = handler.Handle(new InvoiceQuery { MinAmount = 100 });
Now the exception handling is done in one class and all of your other classes don't need to be bothered with it. The same idea can be applied to the business actions (command side).
Aside from that, in this case I just added one layer for exception handling. You could wrap the exception handler inside a logger as well and so build various decorators on top of each other. That way you can create one class for logging, one for exception handling, one for ...
Not only does it allow you to separate that behavior from the actual classes, but it also lets you customize it for every different handler should you wish (wrap a customer handler with exception handling and logging, and an invoice handler only in a logging handler, for example).
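To make that concrete, here is a minimal sketch of a stackable logging decorator; the Action<string> logging callback is just a placeholder for whatever Azure logging wrapper you end up using:
// Hypothetical sketch: a logging decorator that can be stacked on top of
// QueryExceptionHandler (or any other IQueryHandler<TQuery, TResult>).
class QueryLogHandler<TQuery, TResult> : IQueryHandler<TQuery, TResult>
{
    private readonly IQueryHandler<TQuery, TResult> _inner;
    private readonly Action<string> _log; // assumption: any logging callback, e.g. one writing to Azure diagnostics

    public QueryLogHandler(IQueryHandler<TQuery, TResult> inner, Action<string> log)
    {
        _inner = inner;
        _log = log;
    }

    public TResult Handle(TQuery query)
    {
        _log($"Handling {typeof(TQuery).Name}");
        var result = _inner.Handle(query);
        _log($"Handled {typeof(TQuery).Name}");
        return result;
    }
}

// Stacking: logging wraps exception handling, which wraps the real handler.
IQueryHandler<CustomerQuery, List<Customer>> handler =
    new QueryLogHandler<CustomerQuery, List<Customer>>(
        new QueryExceptionHandler<CustomerQuery, List<Customer>>(new GetCustomersHandler()),
        Console.WriteLine);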
Constructing your handlers like in the example above is very cumbersome (especially when you start adding multiple decorators), but it's just to show you how they work together.
It would be a better idea to use dependency injection for that. You could do manual DI, a functional approach (see Greg Young's video) or use a DI-container.
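For example, a minimal sketch of the wiring with Microsoft.Extensions.DependencyInjection (just one possible container; others, such as Simple Injector, have first-class decorator registration):
// Composition root (sketch): register the fully decorated handler once,
// so the UI only ever depends on IQueryHandler<,>.
var services = new ServiceCollection();

services.AddTransient<IQueryHandler<CustomerQuery, List<Customer>>>(provider =>
    new QueryLogHandler<CustomerQuery, List<Customer>>(
        new QueryExceptionHandler<CustomerQuery, List<Customer>>(new GetCustomersHandler()),
        Console.WriteLine));

var container = services.BuildServiceProvider();

// Somewhere in the UI:
var handler = container.GetRequiredService<IQueryHandler<CustomerQuery, List<Customer>>>();
var customers = handler.Handle(new CustomerQuery { Country = "us", OrderBy = "Name" });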
I know it looks like a very complicated sample, but you'll soon notice that once you have the little structure set up, it's actually quite easy to work with. You can refer to my article where you can also see a solution using a DI-container.

Related

Repository Interface with (or without) IProgress

I've got a repository interface (simplified example code):
public interface IPersonRepository
{
Task<PersonDTO> Get();
}
With two implementations.
One for a direct connection to a database:
public class SqlPersonRepository : SqlRepository, IPersonRepository
{
public SqlPersonRepository(IDbConnectionProvider dbCon) : base(dbCon) { }
public async Task<PersonDTO> Get()
{
// use dbCon and dapper to get PersonDTO from database
}
}
And another one for remote access via web api:
public class ApiPersonRepository : ApiRepository, IPersonRepository
{
public ApiPersonRepository(IApiConnectionProvider apiCon) : base(apiCon) { }
public async Task<PersonDTO> Get()
{
// use apiCon (contains base url and access token) to perform an HTTP GET request
}
}
The interface makes sense here, because the server can use the SqlPersonRepository. And the remote (native) client can use the ApiPersonRepository. And for almost all of the use cases, this is all I need.
However, my application supports an extraction of a subset of data from the server so that the client application can run while offline. In this case, I'm not just grabbing one person, I'm grabbing a large set of data (several to tens of megabytes) which many times will be downloaded over a slow mobile connection. I need to pass in an IProgress implementation so I can report progress.
In those cases, I need an ApiDatabaseRepository that looks like this:
public class ApiDatabaseRepository : ApiRepository, IDatabaseRepository
{
public ApiDatabaseRepository(IApiConnectionProvider apiCon) : base(apiCon) { }
public async Task<DatabaseDTO> Get(IProgress<int?> progress)
{
// use apiCon (contains base url and access token) to perform an HTTP GET request
// as data is pulled down, report back a percent downloaded, e.g.
progress.Report(percentDownloaded);
}
}
However the SqlDatabaseRepository does NOT need to use IProgress (even if Dapper COULD report progress against a database query, which I don't think it can). Regardless, I'm not worried about progress when querying the database directly, but I am worried about it when making an API call.
So the easy solution, is that the SqlDatabaseRepository implementation accepts the IProgress parameter, with a default value of null, and then the implementing method just ignores that value.
public class SqlDatabaseRepository : SqlRepository, IDatabaseRepository
{
public SqlDatabaseRepository(IDbConnectionProvider dbCon) : base(dbCon) { }
public async Task<DatabaseDTO> Get(IProgress<int?> progress = null)
{
// use dbCon and dapper to get DatabaseDTO from database
// progress is never used
}
}
But that smells funny. And when things smell funny, I wonder if I'm doing something wrong. This method signature would give the indication that progress will be reported, even though it won't.
Is there a design pattern or a different architecture I should be using in this case?
Oversimplifying this, you basically have 2 options: having a consistent interface or not.
There are, of course, other design patterns which might work here (e.g. some decorators and a factory method), but I believe them to be overkill.
If you stick to the general rule that a consistent interface is desired, I think having a "not entirely implemented" callback technique isn't that bad. You could also consider just implementing it, or at least making it return something which makes sense.
I would definitely avoid a construction with 2 different interfaces of some kind. Although sometimes this is the better option (when checking whether something supports something, e.g. testing if a hardware component is available), I see it as overkill in your scenario. It would also put more logic on the caller side, and unless you want to open a progress-dialog screen only in this scenario, I would avoid it.
A last note: there are alternative progress-reporting patterns, such as using an event, or passing in an optional callback method. The latter looks like your solution but is in fact a little different.
Still, it faces you with the same issue in the end, but it might be worth considering.
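A hedged sketch of that optional-callback variant, assuming the interface were changed to take a plain callback instead of IProgress (names are illustrative; the base classes and constructor plumbing from the question are omitted, and the usual System/System.Threading.Tasks usings are assumed):
// The callback is optional; implementations that cannot report progress just never call it.
public interface IDatabaseRepository
{
    Task<DatabaseDTO> Get(Action<int> reportProgress = null);
}

public class ApiDatabaseRepository : IDatabaseRepository
{
    public async Task<DatabaseDTO> Get(Action<int> reportProgress = null)
    {
        // ... perform the HTTP GET; as chunks arrive:
        reportProgress?.Invoke(42); // no-op when the caller didn't pass a callback
        return await Task.FromResult(new DatabaseDTO());
    }
}

// A caller that cares passes a lambda, everyone else passes nothing:
// var dto = await repo.Get(percent => Console.WriteLine($"{percent}% downloaded"));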
There are many more solutions - but given the context you provided, I am not sure if they apply. And keep in mind, this is highly opinion based.

C# work with database which type is not known in advance

I have an app that collects data and writes it to a database. The database type is not known in advance; it's defined via an .ini file. So I have a method like this, if the database is Firebird SQL:
public bool writeToDB()
{
FbConnection dbConn = new FbConnection(connString);
dbConn.Open();
FbTransaction dbTrans = dbConn.BeginTransaction();
FbCommand writeCmd = new FbCommand(cmdText, dbConn, dbTrans);
/* some stuff */
writeCmd.ExecuteNonQuery();
dbTrans.Commit();
writeCmd.Dispose();
dbConn.Close();
return true;
}
To make the same work for e.g. MS Access database, I only have to replace FbConnection, FbTransaction and FbCommand with OleDbConnection, OleDbTransaction and OleDbCommand respectively.
But I don't want to have a separate identical method for each type of database.
Is it possible to define the database connection / transaction / command type at runtime, after the database type is known?
Thanks
When you're writing code at this level - opening and closing connections, creating and executing commands - there's probably no benefit in trying to make this method or class database-agnostic. This code is about implementation details so it makes sense that it would be specific to an implementation like a particular database.
But I don't want to have a separate identical method for each type of database.
You're almost certainly better off having separate code for separate implementations. If you try to write code that accommodates multiple implementations it will be complicated. Then another implementation (database) comes along which almost but doesn't quite fit the pattern you've created and you have to make it even more complicated to fit that one in.
I don't know the specifics of what you're building, but "what if I need a different database" is usually a "what if" that never happens. When we try to write one piece of code that satisfies requirements we don't actually have, it becomes complex and brittle. Then real requirements come along and they're harder to meet because our code is tied in knots to do things it doesn't really need to do.
That doesn't mean that all of our code should be coupled to a specific implementation, like a database. We just have to find a level of abstraction that's good enough. Does our application need to interact with a database to save and retrieve data? A common abstraction for that is a repository. In C# we could define an interface like this:
public interface IFooRepository
{
Task<Foo> GetFoo(Guid fooId);
Task Save(Foo foo);
}
Then we can create separate implementations for different databases if and when we need them. Code that depends on IFooRepository won't be coupled to any of those implementations, and those implementations won't be coupled to each other.
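If the database type really is only known at runtime (the .ini setting), that decision can still be made in one place, at startup. A hedged sketch, where FirebirdFooRepository and OleDbFooRepository are hypothetical implementations of IFooRepository built on FbConnection/FbCommand and OleDbConnection/OleDbCommand respectively:
public static class FooRepositoryFactory
{
    // Called once at startup with the value read from the .ini file.
    public static IFooRepository Create(string dbTypeFromIni, string connString)
    {
        switch (dbTypeFromIni)
        {
            case "firebird":
                return new FirebirdFooRepository(connString); // uses FbConnection/FbCommand internally
            case "access":
                return new OleDbFooRepository(connString);    // uses OleDbConnection/OleDbCommand internally
            default:
                throw new NotSupportedException($"Unknown database type '{dbTypeFromIni}'");
        }
    }
}

// Everything else depends only on IFooRepository and never sees the concrete type.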
First (and Second and Third). STOP REINVENTING THE WHEEL.
https://learn.microsoft.com/en-us/ef/core/providers/?tabs=dotnet-core-cli
A lot of code and a lot of better testing has already been done.
Guess what is in that larger list:
FirebirdSql.EntityFrameworkCore.Firebird (Firebird 3.0 onwards)
EntityFrameworkCore.Jet (Microsoft Access files)
......
So I'm gonna suggest something in line with everyone else, BUT also... one that allows for some reuse.
I am basing this on the fact that Entity Framework Core provides functionality for several RDBMSs.
See:
https://learn.microsoft.com/en-us/ef/core/providers/?tabs=dotnet-core-cli
public interface IEmployeeDomainDataLayer
{
Task<Employee> GetSingle(Guid empKey);
Task Save(Employee emp);
}
public abstract class EmployeeEntityFrameworkDomainDataLayerBase : IEmployeeDomainDataLayer
{
/* you'll inject a MyDbContext into this class */
//implement Task<Employee> GetSingle(Guid empKey); /* but also allow this to be overrideable */
//implement Task Save(Employee emp); /* but also allow this to be overrideable */
}
public class EmployeeJetEntityFrameworkDomainDataLayer : EmployeeEntityFrameworkDomainDataLayerBase, IEmployeeDomainDataLayer
{
/* do not do any overriding OR override if you get into a jam */
}
public class EmployeeSqlServerEntityFrameworkDomainDataLayer : EmployeeEntityFrameworkDomainDataLayerBase, IEmployeeDomainDataLayer
{
/* do not do any overriding OR override if you get into a jam */
}
You "code to an interface, not an implementation". Aka, your business layer codes to IEmployeeDomainDataLayer.
This gives you most code in EmployeeEntityFrameworkDomainDataLayerBase. BUT if any of the concretes give you trouble, you have a way to code something up ONLY FOR THAT CONCRETE.
If you want DESIGN TIME "picking of the RDBMS", then you do this:
You inject one of the concretes (EmployeeJetEntityFrameworkDomainDataLayer OR EmployeeSqlServerEntityFrameworkDomainDataLayer) into your IOC, based on which backend you want to wire to.
If you want RUN-TIME "picking of the RDBMS", you can define a "factory".
public static class HardCodedEmployeeDomainDataLayerFactory
{
    public static IEmployeeDomainDataLayer getAnIEmployeeDomainDataLayer(string key)
    {
        // pick the concrete based on the key from configuration
        if (key == "jet")
            return new EmployeeJetEntityFrameworkDomainDataLayer();

        return new EmployeeSqlServerEntityFrameworkDomainDataLayer();
    }
}
The factory above suffers from IOC anemia. Aka, if your concretes need items for their constructors... you have to fudge them.
A better idea is the kissing cousin of the "Factory" pattern, called the Strategy pattern.
It is a "kinda factory", BUT you inject the possible results of the "factory" in via a constructor. Aka, the "factory" is NOT hard coded... and does NOT suffer from IOC anemia.
See my answer here:
Using a Strategy and Factory Pattern with Dependency Injection

C# how to "register" class "plug-ins" into a service class?

Update#2 as of year 2022
All these years have passed and still no good answer.
Decided to revive this question.
I'm trying to implement something like the idea I'm trying to show with the following diagram (at the end of the question).
Everything is coded, from the abstract class Base down to the DoSomething classes.
My "Service" needs to provide to the consumer "actions" of the type "DoSomethings" that the service has "registered". At this point I find myself repeating (copy/pasting) the following logic in the service class:
public async Task<Obj1<XXXX>> DoSomething1(....params....)
{
var action = new DoSomething1(contructParams);
return await action.Go(....params....);
}
I would like to know if there is any way in C# to "register" all the "DoSomething" actions I want in a different way? Something more dynamic and less "copy/paste", and at the same time providing "intellisense" in my consumer class? Some kind of "injecting" a list of accepted "DoSomething" actions for that service.
Update#1
After reading the suggestion from PanagiotisKanavos about MEF and checking other IoC options, I was not able to find exactly what I am looking for.
My objective is to have my Service1 class (and all similar ones) behave like a DynamicObject, but where the accepted methods are defined in its own constructor (where I specify exactly which DoSomethingX I am offering as a method call).
Example:
I have several actions (DoSomethingX) such as "BuyCar", "SellCar", "ChangeOil", "StartEngine", etc....
Now, I want to create a service "CarService" that should only offer the actions "StartEngine" and "SellCar", while I might have other "Services" with other combinations of "actions". I want to define this logic inside the constructor of each service. Then, in the consumer class, I just want to do something like:
var myCarService = new CarService(...paramsX...);
var res1 = myCarService.StartEngine(...paramsY...);
var res2 = myCarService.SellCar(...paramsZ...);
And I want to offer intellisense when I use the "CarService"....
In conclusion: the objective is to "register" in each Service which methods it provides, by giving it a list of "DoSomethingX", and automatically offer them as "methods"... I hope I was able to explain my objective/wish.
In other words: I just want to be able to say that my class Service1 is "offering" the actions DoSomething1, DoSomething2 and DoSomething3, but with as few lines as possible. Something like the concept of class attributes, where I could do something similar to this:
// THEORETICAL CODE
[RegisterAction(typeOf(DoSomething1))]
[RegisterAction(typeOf(DoSomething2))]
[RegisterAction(typeOf(DoSomething3))]
public class Service1{
// NO NEED OF EXTRA LINES....
}
For me, MEF/MAF are really something you might do last in a problem like this. The first step is to work out your design. I would do the following:
Implement the decorator design pattern (or a similar structural pattern of your choice). I pick decorator as that looks like what you are going for by supplementing certain classes with shared functionality that isn't defined in those classes (i.e. composition seems preferred in your example as opposed to inheritance). See here http://www.dofactory.com/net/decorator-design-pattern
Validate the step 1 POC to work out whether it would do what you want if it was added as a separate dll (i.e. by making a different CSProj baked in at build time).
Evaluate whether MEF or MAF is right for you (depending on how heavyweight you want to go). Compare those against other techniques like microservices (which would philosophically change your current approach).
Implement your choice of hot swapping (MEF is probably the most logical based on the info you have provided).
You could use Reflection.
In class Service1 define a list of BaseAction types that you want to provide:
List<Type> providedActions = new List<Type>();
providedActions.Add(typeof(DoSomething1));
providedActions.Add(typeof(DoSomething2));
Then you can write a single DoSomething method which selects the correct BaseAction at run-time:
public async Task<Obj1<XXXX>> DoSomething(string actionName, ....params....)
{
Type t = providedActions.Find(x => x.Name == actionName);
if (t != null)
{
var action = (BaseAction)Activator.CreateInstance(t);
return await action.Go(....params....);
}
else
return null;
}
The drawback is that the client doesn't know the actions provided by the service unless you implement an ad-hoc method like:
public List<string> ProvidedActions()
{
List<string> lst = new List<string>();
foreach(Type t in providedActions)
lst.Add(t.Name);
return lst;
}
Maybe RealProxy can help you? If you create an ICarService interface which inherits IAction1 and IAction2, you can then create a proxy object which will:
Find all the interfaces ICarService inherits.
Find realizations of these interfaces (using an actions factory or reflection).
Create an action list for the service.
In its Invoke method, delegate the call to one of the actions.
This way you will have intellisense as you want, and actions will be building blocks for the services. Some kind of multi-inheritance hack :)
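On modern .NET, DispatchProxy is the closest counterpart to RealProxy, so a rough sketch of the same idea could look like this (a substitute technique, every name below is illustrative, and the usual System.Collections.Generic/System.Reflection usings are assumed):
// A proxy that exposes a service interface but delegates each call to a registered action.
public class ServiceProxy<TService> : DispatchProxy where TService : class
{
    // Maps interface method names to the action delegates that implement them.
    private readonly Dictionary<string, Func<object[], object>> actions =
        new Dictionary<string, Func<object[], object>>();

    public static TService Create(IDictionary<string, Func<object[], object>> actionMap)
    {
        var proxy = DispatchProxy.Create<TService, ServiceProxy<TService>>() as ServiceProxy<TService>;
        foreach (var pair in actionMap)
            proxy.actions[pair.Key] = pair.Value;
        return proxy as TService;
    }

    protected override object Invoke(MethodInfo targetMethod, object[] args)
        => actions[targetMethod.Name](args); // delegate the call to the registered action
}

// Usage (illustrative): a CarService that only offers StartEngine and SellCar.
// ICarService car = ServiceProxy<ICarService>.Create(new Dictionary<string, Func<object[], object>>
// {
//     ["StartEngine"] = args => /* run the StartEngine action */ null,
//     ["SellCar"]     = args => /* run the SellCar action */ null,
// });
// car.StartEngine(...); // intellisense comes from ICarService itself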
At this point I am really tempted to do the following:
Make my own class attribute RegisterAction (just like I wrote in my "theoretical" example)
Extend the Visual Studio build process
Then, in my public class LazyProgrammerSolutionTask : Microsoft.Build.Utilities.Task, try to find the service classes and identify the RegisterAction attributes.
Then, per each one, inject using reflection my own method (the one that I am always copy/pasting)... and of course get the "signature" from the corresponding target "action" class.
In the end, compile everything again.
Then my "next project" that will consume this project (library) will have the intellisense that I am looking for....
One thing that I am really not sure about is how debugging would work with this....
Since this is also still a theoretical (BUT POSSIBLE) solution, I do not yet have source code to share.
Meanwhile, I will leave this question open for other possible approaches.
I must disclose, I've never attempted anything of the sort, so this is a thought experiment. A couple of wild ideas I'd explore here.
extension methods
You could declare and implement all your actions as extension methods against the base class. This, I believe, will cover your intellisense requirements. Then you have each implementation check whether it's registered against the calling type before proceeding (use attributes, an interface hierarchy or other means you prefer). This will get a bit noisy in intellisense, as every method will be displayed on the base class, and this is where you can potentially opt to filter it down with a custom intellisense plugin.
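A hedged sketch of that idea, assuming the usual System, System.Linq and System.Threading.Tasks usings (every type name below is invented for illustration; DoSomething1 stands in for one of the real action classes):
[AttributeUsage(AttributeTargets.Class, AllowMultiple = true)]
public class RegisterActionAttribute : Attribute
{
    public Type ActionType { get; }
    public RegisterActionAttribute(Type actionType) => ActionType = actionType;
}

public abstract class ServiceBase { }
public class DoSomething1 { /* the real action class */ }

[RegisterAction(typeof(DoSomething1))]
public class Service1 : ServiceBase { }

public static class ServiceActions
{
    // One extension method per action; intellisense shows it on every ServiceBase,
    // but it only runs when the concrete service has registered the action.
    public static Task DoSomething1Async(this ServiceBase service, int someParam)
    {
        EnsureRegistered(service, typeof(DoSomething1));
        // ... construct the real DoSomething1 action and call its Go(...) here ...
        return Task.CompletedTask;
    }

    private static void EnsureRegistered(ServiceBase service, Type actionType)
    {
        bool registered = service.GetType()
            .GetCustomAttributes(typeof(RegisterActionAttribute), inherit: true)
            .Cast<RegisterActionAttribute>()
            .Any(a => a.ActionType == actionType);

        if (!registered)
            throw new InvalidOperationException(
                $"{service.GetType().Name} does not register {actionType.Name}");
    }
}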
custom intellisense plugin
You could write a plugin that would scan the current code base (see Roslyn), analyze your current service method registrations (by means of attributes, interfaces or whatever you prefer) and build a list of autocomplete methods that apply in this particular case.
This way you don't have to install any special plugins into your dev environment to have everything functional; the custom VS plugin would be there purely for convenience.
If you have a set of actions in your project that you want to invoke, maybe you could look at it from a CQS (Command Query Separation) perspective, where you define a command and a handler for that command that actually performs the action. Then you can use a dispatcher to dispatch a command to a handler in a dynamic way. The code may look similar to:
public class StartEngine
{
public StartEngine(...params...)
{
}
}
public class StartEngineHandler : ICommandHandler<StartEngine>
{
public StartEngineHandler(...params...)
{
}
public async Task Handle(StartEngine command)
{
// Start engine logic
}
}
public class CommandDispatcher : ICommandDispatcher
{
private readonly Container container;
public CommandDispatcher(Container container) => this.container = container;
public async Task Dispatch<T>(T command) =>
await container.GetInstance<ICommandHandler<T>>().Handle(command);
}
// Client code
await dispatcher.Dispatch(new StartEngine(params, to, start, engine));
These two articles will give you more context on the approach: Meanwhile... on the command side of my architecture, Meanwhile... on the query side of my architecture.
There is also the MediatR library that solves a similar task that you may want to check.
If the approaches above do not fit the need and you want to "dynamically" inject actions into your services, Fody can be a good way to implement it. It instruments the assembly during the build, after the IL is generated, so you could implement your own weaver to generate methods in the classes decorated with your RegisterAction attribute.

Where should I put commonly used data access code with logic not fitting to Repository when using Service classes on top of Repository/UnitOrWork?

In my earlier question I was asking about implementing the repository/unit of work pattern for large applications built with an ORM framework like EF.
One follow-up problem I cannot get through right now is where to put code that contains business logic, but is still low-level enough to be used commonly in many other parts of the application.
For example here is a few such method:
Getting all users in one or more roles.
Getting all cities where a user has privileges, within an optional region.
Getting all measure devices of a given device type, within a given region for which the current user has privileges.
Finding a product by code, checking if it's visible, and throwing an exception if not found or not visible.
All of these methods use a UnitOfWork for data access or manipulation, and receive several parameters as per their specification. I think everyone could list a lot more examples of such common tasks in a large project. My question is: where shall I put these method implementations? I can see the following options currently.
Option 1: Every method goes to its own service class
public class RegionServices {
// support DI constructor injection
public RegionServices(IUnitOfWork work) {...}
...
public IEnumerable<City> GetCitiesForUser(User user, Region region = null) { ... }
...
}
public class DeviceServices {
// support DI constructor injection
public DeviceServices(IUnitOfWork work) {...}
...
public IEnumerable<Device> GetDevicesForUser(User user, DeviceType type, Region region = null) { ... }
...
}
What I don't like about it is that if a higher-level application service needs to call, for example, 3 of these methods, then it needs to instantiate 3 services, and if I use DI then I even have to put all 3 into the constructor, easily resulting in quite a bit of code smell.
Option 2: Creating some kind of Facade for such common data access
public class DataAccessHelper {
// support DI constructor injection
public DataAccessHelper(IUnitOfWork work) {...}
...
public IEnumerable<City> GetCitiesForUser(User user, Region region = null) { ... }
public IEnumerable<Device> GetDevicesForUser(User user, DeviceType type, Region region = null) { ... }
public IEnumerable<User> GetUsersInRoles(params string[] roleIds) { ... }
...
}
I don't like it because it feels like it violates the SRP, but its usage is much more comfortable.
Option 3: Creating extension methods for the Repositories
public static class DataAccessExtensions {
public static IEnumerable<City> GetCitiesForUser(this IRepository repo, User user, Region region = null) { ... }
}
Here IRepository is an interface with generic methods like Query<T>, Save<T>, etc. I don't like it either, because it feels like giving business logic to repositories, which is not advisable AFAIK. However, it expresses that these methods are common and lower level than service classes, which I like.
Maybe there are other options as well?... Thank you for the help.
If you say that a certain piece of domain logic needs to look at 3 distinct pieces of information in order to make a decision, then we will need to provide this information to it.
Further, if we say that each of these distinct pieces can be useful to other parts of the domain, then each of them will need to be in its own method as well. We can debate whether each query needs to be housed in a separate class or not, depending on your domain/design.
The point I wanted to make is that there will be an application service which delegates to one or more Finder classes (classes where your queries are housed); these classes house only queries, accumulate the results and pass them down to a Domain Service as method params.
The domain service acts on the received parameters, executes the logic and returns the result. This way the domain service is easily testable.
Pseudo code:
App Service
var result1 = finder.Query1();
var result2 = finder.Query2();
var result3 = yetAnotherFinder.Query();
var domainResult = domainService.Calculate(result1, result2, result3);
Repositories belong to the domain, queries do not (http://www.jefclaes.be/2014/01/repositories-where-did-we-go-wrong_26.html).
You could define explicit queries and query handlers and use those outside of your domain.
public class GetUserStatisticsQuery
{
public int UserId { get; set; }
}
public class GetUserStatisticsQueryResult
{
...
}
public class GetUserStatisticsQueryHandler :
IHandleQuery<GetUserStatisticsQuery, GetUserStatisticsQueryResult>
{
public GetUserStatisticsQueryResult Handle(GetUserStatisticsQuery query)
{
... "SELECT * FROM x" ...
}
}
var result = _queryExecutor.Execute<GetUserStatisticsQueryResult>(
    new GetUserStatisticsQuery { UserId = 1 });
I'm adding my conclusion as an answer, because I quickly realized that this question is quite relative and not exact; it depends heavily on personal preferences and design trends.
The comments and the answers helped me in seeing more clearly how things like this should basically be implemented, thank you for all of your effort.
Conclusion
A "repository" should be responsible clearly only for data persisting. Because it doesn't hold any domain logic, or type specific logc, I think it can be represented and implemented as an IRepository interface with generic methods like Save<T>, Delete<T>, Query<T>, GetByID<T>, etc. Please refer to my previous question mentioned in the beginning of my original post.
On the other hand, I think (at least now, with my current project) that introducing a new class (or classes) for each lower-level domain logic task (in most cases some kind of querying logic) is a bit of an over-engineered solution, which is not needed for me. I mean I don't want to introduce classes like GetUsersInRoles or GetDevicesInRegionWithType, etc. I feel I would end up with a lot of classes, and a lot of boilerplate code when referring to them.
I decided to implement the 3rd option, adding static query functions as extensions to IRepository. They can be nicely separated in a Queries project folder, and structured in several static classes, each named after the underlying domain model on which it defines operations. For example, I've implemented user-related queries as follows: in the Queries folder I've created a UserQueries.cs file, in which I have:
public static class UserQueries {
public static IEnumerable<User> GetInRoles(this IRepository repository, params string[] roles)
{
...
}
}
This way I can easily and comfortably access such methods via extensions on every IRepository, the methods are unit-testable and support DI (as they are callable on any IRepository implementation). This technique fits my current needs best.
It can be refactored even further to make it even cleaner. I could introduce "ghost" sealed classes like UserQueriesWrapper and use them to structure the calling code, and this way not put every such extension directly on IRepository. I mean something like this:
// technical class, wraps an IRepository dummily forwarding all members to the wrapped object
public class RepositoryWrapper : IRepository
{
internal RepositoryWrapper(IRepository repository) {...}
}
// technical class for holding user related query extensions
public sealed class UserQueriesWrapper : RepositoryWrapper {
internal UserQueriesWrapper(IRepository repository) : base(repository) {...}
}
public static class UserQueries {
public static UserQueriesWrapper Users(this IRepository repository) {
return new UserQueriesWrapper(repository);
}
public static IEnumerable<User> GetInRoles(this UserQueriesWrapper repository, params string[] roles)
{
...
}
}
...
// now I can use it with a nicer and cleaner syntax
var users = _repo.Users().GetInRoles("a", "b");
...
Thank you for the answers and comments again, and please if there is something I didn't notice or any gotcha with this technique, leave a comment here.

CommandHandler decorators dependency

I have an issue where I would like my handler to use data generated by the other handlers in the chain:
UpdateUserProfileImageCommandHandlerAuthorizeDecorator
UpdateUserProfileImageCommandHandlerUploadDecorator
UpdateUserProfileImageCommandHandler
My problem is both architectural and performance.
UpdateUserCommandHandlerAuthorizeDecorator makes a call to the repository (Entity Framework) to authorize the user. I have other decorators similar to this that should use and modify entities and send them up the chain.
UpdateUserCommandHandler should just save the user to the database. I currently have to make another repository call and update the entity, while I could have worked on the entity from the previous decorator.
My issue is that the command only accepts the user Id and some properties to update. In the case where I get the user entity from the Authorize decorator, how can I still work on that entity up the chain? Is it Ok to add that User property to the command and work on that?
Code:
public class UpdateUserProfileImageCommand : Command
{
    public UpdateUserProfileImageCommand(Guid id, Stream image)
    {
        this.Id = id;
        this.Image = image;
    }

    public Guid Id { get; set; }
    public Stream Image { get; set; }
    public Uri ImageUri { get; set; }
}
public class UpdateUserProfileImageCommandHandlerAuthorizeDecorator : ICommandHandler<UpdateUserProfileImageCommand>
{
public void Handle(UpdateUserProfileImageCommand command)
{
// I would like to use this entity in `UpdateUserProfileImageCommandHandlerUploadDecorator`
var user = userRepository.Find(u => u.UserId == command.Id);
if(userCanModify(user, currentPrincipal))
{
decoratedHandler(command);
}
}
}
public class UpdateUserProfileImageCommandHandlerUploadDecorator : ICommandHandler<UpdateUserProfileImageCommand>
{
public void Handle(UpdateUserProfileImageCommand command)
{
// Instead of asking for this from the repository again, I'd like to reuse the entity from the previous decorator
var user = userRepository.Find(u => u.UserId == command.Id);
fileService.DeleteFile(user.ProfileImageUri);
command.ImageUri = fileService.Upload(generatedUri, command.Image);
decoratedHandler(command);
}
}
public class UpdateUserProfileImageCommandHandler : ICommandHandler<UpdateUserProfileImageCommand>
{
public void Handle(UpdateUserProfileImageCommand command)
{
// Again I'm asking for the user...
var user = userRepository.Find(u => u.UserId == command.Id);
user.ProfileImageUri = command.ImageUri;
// I actually have this in a PostCommit Decorator.
unitOfWork.Save();
}
}
You should not try to pass on any extra data just for the sake of performance. Besides, using decorators, you can't change the contract. Instead, you should allow that user entity to be cached, and this should typically be the responsibility of the repository implementation. With Entity Framework this is actually rather straightforward. You can call DbSet.Find(id) and EF will first look up the entity in the cache. This prevents unneeded round trips to the database. I do this all the time.
So the only thing you have to do is add a Find(key) or GetById method to your repository that maps to EF's Find(key) method and you're done.
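A hedged sketch of what that repository method might look like (IUserRepository and the constructor wiring are assumptions):
public class UserRepository : IUserRepository
{
    private readonly DbContext context;

    public UserRepository(DbContext context)
    {
        this.context = context;
    }

    public User GetById(Guid id)
    {
        // DbSet<T>.Find checks the context's identity map first and only hits
        // the database when the entity isn't already tracked.
        return this.context.Set<User>().Find(id);
    }
}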
Furthermore, I agree with Pete. Decorators should be primarily for cross-cutting concerns. Adding other things in decorators can be okay sometimes, but you seem to split up the core business logic over both the handler and its decorators. Writing the file to disk belongs to the core logic. You might be concerned about adhering to the Single Responsibility Principle, but it seems to me that you're splitting a single responsibility out over multiple classes. That doesn't mean that your command handlers should be big. As Pete said, you probably want to extract this to a service and inject this service into the handler.
Validating the authorization is a cross-cutting concern, so having this in a decorator seems okay, but there are a few problems with your current implementation. First of all, doing it like this will cause you to have many non-generic decorators, which leads to a lot of maintenance. Besides, you silently skip the execution if the user is unauthorized which is typically not what you want.
Instead of silently skipping, consider throwing an exception and prevent the user from being able to call this functionality under normal circumstances. This means that if an exception is thrown, there's either a bug in your code, or the user is hacking your system. Silently skipping without throwing an exception can make it much harder to find bugs.
Another thing you might want to consider is implementing this authorization logic as a generic decorator. For instance, have a generic authorization decorator or validation decorator. This might not always be possible, but you might be able to mark commands with an attribute. For instance, in the system I'm currently working on we mark our commands like this:
[PermittedRole(Role.LabManagement)]
We have an AuthorizationVerifierCommandHandlerDecorator<TCommand> that checks the attributes of the command being executed and verifies whether the current user is allowed to execute that command.
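Roughly, such a decorator could look like the sketch below (the PermittedRoleAttribute.Role property and the IPrincipal wiring are assumptions about that system, not its actual code):
public class AuthorizationVerifierCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decoratee;
    private readonly IPrincipal currentUser;

    public AuthorizationVerifierCommandHandlerDecorator(
        ICommandHandler<TCommand> decoratee, IPrincipal currentUser)
    {
        this.decoratee = decoratee;
        this.currentUser = currentUser;
    }

    public void Handle(TCommand command)
    {
        var permittedRoles = typeof(TCommand)
            .GetCustomAttributes(typeof(PermittedRoleAttribute), inherit: true)
            .Cast<PermittedRoleAttribute>()
            .ToArray();

        // Throw instead of silently skipping, as argued above.
        if (permittedRoles.Any() && !permittedRoles.Any(p => currentUser.IsInRole(p.Role.ToString())))
            throw new UnauthorizedAccessException(
                $"{currentUser.Identity.Name} may not execute {typeof(TCommand).Name}");

        this.decoratee.Handle(command);
    }
}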
UPDATE
Here's an example of what I think your UpdateUserProfileImageCommandHandler could look like:
public class UpdateUserProfileImageCommandHandler
    : ICommandHandler<UpdateUserProfileImageCommand>
{
    private readonly IUserRepository userRepository;
    private readonly IFileService fileService;

    public UpdateUserProfileImageCommandHandler(
        IUserRepository userRepository, IFileService fileService)
    {
        this.userRepository = userRepository;
        this.fileService = fileService;
    }

    public void Handle(UpdateUserProfileImageCommand command)
    {
        var user = this.userRepository.GetById(command.Id);
        this.fileService.DeleteFile(user.ProfileImageUri);
        command.ImageUri = this.fileService.Upload(generatedUri, command.Image);
        user.ProfileImageUri = command.ImageUri;
    }
}
Why do this via decorators in the first place?
Validation
The normal approach is to have clients do any and all validation required before submitting the command. Any command that is created/published/executed should have all (reasonable) validation performed before submitting. I include 'reasonable' because there are some things, like uniqueness, that can't be 100% validated beforehand. Certainly, authorization to perform a command can be done before submitting it.
Split Command Handlers
Having a decorator that handles just a portion of the command handling logic, and then enriches the command object, seems like over-engineering to me. IMHO, decorators should be used to extend a given operation with additional functionality, e.g. logging, transactions, or authentication (although like I said, I don't think that applies to decorating command handlers).
It seems that uploading the image and then assigning the new image URL in the database are the responsibility of one command handler. If you want the details of those two different operations to be abstracted, then inject your handler with classes that do so, like an IUserImageUploader.
Generally
Normally, commands are considered immutable, and should not be changed once created. This is to help enforce that commands should contain up front all the necessary information to complete the operation.
I'm a little late here, but what I do is define an IUserContext class that you can inject via IoC. That way you can load the important user data once, cache it, and have all other dependencies use the same instance. You can then have that data expire after a while and it'll take care of itself.
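A hedged sketch of that idea (all names are illustrative; how the current user id gets in there depends on your setup):
public interface IUserContext
{
    User CurrentUser { get; }
}

public class UserContext : IUserContext
{
    private readonly IUserRepository userRepository;
    private readonly Guid currentUserId;
    private User cachedUser;

    public UserContext(IUserRepository userRepository, Guid currentUserId)
    {
        this.userRepository = userRepository;
        this.currentUserId = currentUserId;
    }

    // Loaded on first access, then shared by every decorator/handler in the same scope.
    public User CurrentUser =>
        cachedUser ?? (cachedUser = userRepository.GetById(currentUserId));
}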
