CQRS with MediatR and reusability of commands - c#

Does it make sense to create commands that just hold objects? For example:
public class CreateCommand : IRequest
{
    public SomeDTO SomeDTO { get; set; }
}

public class UpdateCommand : IRequest
{
    public SomeDTO SomeDTO { get; set; }
}
Or perhaps something like this (deriving):
public class UpdateCommand : SomeDTO, IRequest
{
}
Or should commands/requests be treated as DTOs themselves? I'm confused because I've seen many different ways of doing this. Also, copying all properties into command/request classes doesn't seem like a nice thing to do.
How do you do this in your projects?
Do you map your commands directly to your domain models, or do you use commands just to pass DTOs?
When using an MVC framework, what should be the input of my controller actions? Should it be a command, or should I create the command inside my action implementation and send it? (I guess that will depend on how I model my commands.)

Does it make sense to create commands that just hold objects?
No, the extra class adds no value: no semantics, no behavior...
Or should commands/requests be treated as DTOs themselves?
Commands (in the CQRS sense of the term) are DTOs by nature. They are dumb data bags that circulate between layers/tiers.
Do you map your commands directly to your domain models
It depends on whether you favor a task-based UI over a CRUD-based UI. If you do DDD with a rich domain model - some would even say with basic OO encapsulation - you wouldn't map them. Command names would maybe match entity methods, but their contents are not automatically mapped to domain model fields.
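For illustration, here is a minimal sketch of what a task-based command and its handler might look like with MediatR. All of the names (PostponeMeetingCommand, Meeting, IMeetingRepository) are made up for the example; the point is that the handler calls behavior on the entity rather than copying fields onto it:

using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// A task-based command: it says what should happen, not what the record should look like.
public class PostponeMeetingCommand : IRequest
{
    public Guid MeetingId { get; set; }
    public DateTime NewStart { get; set; }
}

// Hypothetical entity; it owns the rule, not the command.
public class Meeting
{
    public Guid Id { get; private set; }
    public DateTime Start { get; private set; }

    public void Postpone(DateTime newStart)
    {
        if (newStart <= Start)
            throw new InvalidOperationException("A meeting can only be postponed to a later time.");
        Start = newStart;
    }
}

// Hypothetical persistence abstraction.
public interface IMeetingRepository
{
    Task<Meeting> Get(Guid id, CancellationToken ct);
    Task Save(Meeting meeting, CancellationToken ct);
}

public class PostponeMeetingHandler : IRequestHandler<PostponeMeetingCommand>
{
    private readonly IMeetingRepository _meetings;

    public PostponeMeetingHandler(IMeetingRepository meetings) => _meetings = meetings;

    // Note: older MediatR versions return Task<Unit> here instead of Task.
    public async Task Handle(PostponeMeetingCommand command, CancellationToken cancellationToken)
    {
        var meeting = await _meetings.Get(command.MeetingId, cancellationToken);
        meeting.Postpone(command.NewStart);   // behavior lives on the entity
        await _meetings.Save(meeting, cancellationToken);
    }
}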
When using an MVC framework, what should be the input of my controller actions? Should it be a command, or should I create the command inside my action implementation and send it?
I would say both are legit and applicable, except for the occasional technical quirk with MVC model binding.
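To make the two options concrete, here is a rough sketch assuming ASP.NET Core MVC and an injected IMediator; the routes are invented and the SomeDTO/UpdateCommand shapes follow the question (in practice you would pick one of the two actions):

using System.Threading.Tasks;
using MediatR;
using Microsoft.AspNetCore.Mvc;

public class SomeController : Controller
{
    private readonly IMediator _mediator;

    public SomeController(IMediator mediator) => _mediator = mediator;

    // Option 1: let model binding build the command itself.
    [HttpPost("some/update")]
    public async Task<IActionResult> Update(UpdateCommand command)
    {
        await _mediator.Send(command);
        return NoContent();
    }

    // Option 2: bind the DTO and build the command inside the action.
    [HttpPost("some/update-from-dto")]
    public async Task<IActionResult> UpdateFromDto(SomeDTO dto)
    {
        await _mediator.Send(new UpdateCommand { SomeDTO = dto });
        return NoContent();
    }
}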

Commands and domain objects, at least in my world, have different design constraints. In particular, commands are part of the API surface - they are part of the contract with other services - and therefore need to have compatible definitions over long periods of time. Domain objects, on the other hand, are local to our current way of doing things - they are part of our organization of data within the black box. So we can change those at any cadence we like.
Commands that cross process boundaries are messages, which is to say byte[]s. That's the bit that needs to be stable, both in form and semantics.
byte[] is domain agnostic, and it's fairly common to pass through several other domain agnostic intermediate stages in "parsing" the message
byte[] -> utf8
utf8 -> DOM
DOM -> Dictionary
...
but we're generally driving toward a domain specific expression of the contract.
See, for instance, Mark Seemann:
At the boundaries, applications are not object-oriented.
A DTO is a representation of such a piece of data mapped into an object-oriented language.
Having coerced the byte[] into a form that is convenient for querying, we can start thinking about whether or not we want to use that data to start initializing "objects".
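As a rough sketch of those stages, assuming the message body is JSON and that SomeDTO from the question has a Name property (a made-up field for the example):

using System.Text;
using System.Text.Json;

public static class CreateCommandParser
{
    public static CreateCommand Parse(byte[] body)
    {
        // byte[] -> utf8: still completely domain agnostic.
        string utf8 = Encoding.UTF8.GetString(body);

        // utf8 -> DOM: a queryable, still domain-agnostic representation.
        using JsonDocument dom = JsonDocument.Parse(utf8);
        JsonElement root = dom.RootElement;

        // Only at the end do we produce the domain-specific expression of the contract.
        return new CreateCommand
        {
            SomeDTO = new SomeDTO
            {
                Name = root.GetProperty("name").GetString()   // "name"/Name are illustrative
            }
        };
    }
}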
The other question that you may be asking is whether there is value in having the message data within a generic metadata "envelope". That kind of pattern occurs all the time - the most familiar example being that an HTTP POST is a bunch of generic headers attached to a message-body.
The data and the metadata are certainly separate concerns; it definitely makes sense to keep them distinct in your solution.
I think composing the data structures, rather than inheriting them, is going to be the more maintainable option.
public class Envelope<Message> ....
might be a reasonable starting point.
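A minimal sketch of such an envelope, composed rather than inherited (using the conventional TMessage name for the type parameter); the metadata fields are illustrative, not prescriptive:

using System;

public class Envelope<TMessage>
{
    public Guid MessageId { get; set; }
    public DateTimeOffset SentAt { get; set; }
    public string CorrelationId { get; set; }
    public TMessage Message { get; set; }   // the command itself stays a plain DTO
}

// Usage: wrap the command instead of deriving from it, so its contract stays untouched.
// var envelope = new Envelope<CreateCommand>
// {
//     MessageId = Guid.NewGuid(),
//     SentAt = DateTimeOffset.UtcNow,
//     CorrelationId = "abc-123",
//     Message = new CreateCommand { SomeDTO = new SomeDTO() }
// };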

You should treat the command as a "verbal sentence" instructing your domain to do something. For example, the "UpdateCommand" instructs your domain to update something. Inside the command you should include the specifics of the command (in your case that DTO is fine)...
However, be very careful with those DTOs. You do not want your domain to depend on MVC; it should be the other way around. Be sure that the assembly the DTO lives in is not at a higher level (in the direction of MVC) than the domain logic.
In your MVC you should have only:
Dependency injection setup
Controllers & Views
Controllers should only contain the code required to transform the (untrusted) HTTP method parameters into the DTO required by the domain, and to call the domain (a rough sketch follows below).
At least that is the way I'm doing it.
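A rough sketch of that controller shape, assuming ASP.NET Core MVC; the form, DTO and service types are made up for the example:

using Microsoft.AspNetCore.Mvc;

// Hypothetical web-facing input model and domain-facing DTO.
public class CreateOrderForm { public int CustomerId { get; set; } public string Product { get; set; } }
public class OrderDto { public int CustomerId { get; set; } public string Product { get; set; } }

public interface IOrderService { void PlaceOrder(OrderDto order); }

public class OrdersController : Controller
{
    private readonly IOrderService _orders;

    public OrdersController(IOrderService orders) => _orders = orders;

    [HttpPost]
    public IActionResult Create(CreateOrderForm form)   // raw, untrusted HTTP input
    {
        if (!ModelState.IsValid)
            return BadRequest(ModelState);

        // The controller's only job: translate to the domain's DTO and delegate.
        _orders.PlaceOrder(new OrderDto { CustomerId = form.CustomerId, Product = form.Product });
        return RedirectToAction("Index");
    }
}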

Related

CQRS: Class Redundancy and passing DTO to Domain

My CQRS application has some complex domain objects. When they are created, all properties of the entity are directly specified by the user, so the
CreateFooCommand has about 15 properties.
FooCreatedEvent thus also has 15 properties, because I need all entity properties on the read-side.
As the command parameters must be dispatched to the domain object and the CreateFooCommand should not be passed to the domain,
there is a manual mapping from CreateFooCommand to the domain.
As the domain should create the domain event,
that is another mapping from the domain Foo properties to FooCreatedEvent.
On the read side, I use a DTO to represent the structure of Foo as it is stored within my read-model.
So the event handler updating read-side introduces another mapping from event parameters to DTO.
To implement a simple business case, we have
Two redundant classes
Three mappings of basically the same properties
I thought about getting rid of command/event arguments and pushing the DTO object around, but that would imply that the domain can receive or create a DTO and assign it to the event.
Sequence:
REST Controller --Command+DTO--> Command Handler --DTO--> Domain --(Event+DTO)--> Event Handler
Any ideas about making CQRS less of an implementation pain?
I see the following options:
Create an immutable DTO class FooDetails that is used by both CreateFooCommand and FooCreatedEvent by injecting it in the constructor; type-hint the aggregate method against FooDetails; for example new CreateFooCommand(new FooDetails(prop1, prop2, ...)) - see the sketch after this list.
Create an immutable base class FooDetails that is inherited by both CreateFooCommand and FooCreatedEvent, and type-hint the aggregate method against FooDetails.
Completely change style and use the style promoted by cqrs.nu in which commands are sent directly to the aggregates; the aggregates have command methods like FooAggregate::handle(CreateFooCommand command); I personally use this style a lot.
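A sketch of the first option (names invented and the property count trimmed for the example): a single immutable details object shared by the command and the event, so the ~15 properties are declared once:

public sealed class FooDetails
{
    public FooDetails(string name, int size /*, ...the remaining properties */)
    {
        Name = name;
        Size = size;
    }

    public string Name { get; }
    public int Size { get; }
}

public sealed class CreateFooCommand
{
    public CreateFooCommand(FooDetails details) => Details = details;
    public FooDetails Details { get; }
}

public sealed class FooCreatedEvent
{
    public FooCreatedEvent(FooDetails details) => Details = details;
    public FooDetails Details { get; }
}

// Usage: new CreateFooCommand(new FooDetails("bar", 42));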
With CQRS + ES you opted for a more complex approach with more moving parts, knowing that it would allow you to achieve more. Live with it. The strength of this approach implies separating concerns. A Command is a command, an Event is an event, etc. Although many of them may look similar along the chain, there might be exceptions. Some may contain additional data, or slightly different aspects of the same data. A Command can have meta information about the applicative context (who initiated the command, when, is it a retry, etc.) that doesn't concern the Domain. Read models will often include information about related objects to be displayed in addition to their own info (think parent-child relationship).
There's only so much of the seemingly similar code you can cut off before you block yourself from modelling these exceptions. And introducing inheritance or composition between these data structures is often more complex than the original pain of having to write boilerplate mapping code.

How to deal with MVC ViewModel - Domain Model - Entity in MVC controllers and services

We are writing an MVC data maintenance application as part of a larger project. We are trying to use domain-driven design (DDD).
There are already other questions about this on SO, like here, here and here.
Yet they don't fully answer my question.
We also have bounded contexts in the data layer, since the database has 755 tables. So we created bounded contexts for business, roles, products, customers, etc.
The problem we have is that in the MVC application we have a view for "initial setup" which uses a ViewModel that in the end spans multiple bounded contexts (using the IUnitOfWork pattern in Entity Framework 6).
That view must therefore write to both the business context and the roles context.
The domain model would have a Business model, an Address model and a few other models in a larger object graph.
The ViewModel is a flattened, simplified model of these two and other domain models:
public class InitialSetupViewModel
{
    public string BusinessName { get; set; }
    public string Street { get; set; }
    public string Street2 { get; set; }
    public string State { get; set; }
    public string ZIP { get; set; }
    ...
}
This ViewModel should map to the domain models, which we are doing with Automapper.
The controller gets the domain service injected:
public class SetupController : Controller
{
    private readonly IMaintenanceService service;

    public SetupController( IMaintenanceService maintenanceService = null )
    {
        service = maintenanceService;
    }

    public void Create(...????....)
    {
        service.CreateBusiness(..?.);
    }
}
Problems:
The service can't know about the InitialSetupViewModel, so what should be passed to the service?
The service must know about the BusinessDbContext and RolesDbContext, so I must call SaveChanges() on both, which defeats the purpose of having a single IUnitOfWork. Do I have to create yet another UnitOfWork that includes both business and roles entities?
I don't think it's justifiable to combine these two IUnitOfWorks into one just to make this MVC view work. But what is the solution?
Thank you!
It's always hard to have a strong opinion about a domain you don't know, but here it goes:
As has been commented already, the Controller should assume responsibility for mapping between view and domain models, DTOs or what have you. It could take an instance of InitialSetupViewModel as input, but implementation details may vary.
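As one possible shape of that (the CreateBusinessRequest type and the exact CreateBusiness signature are assumptions for the sketch, not something your service necessarily exposes):

// Hypothetical input object owned by the domain/application layer.
public class CreateBusinessRequest
{
    public string Name { get; set; }
    public string Street { get; set; }
    public string Street2 { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

public class SetupController : Controller
{
    private readonly IMaintenanceService service;

    public SetupController(IMaintenanceService maintenanceService)
    {
        service = maintenanceService;
    }

    [HttpPost]
    public ActionResult Create(InitialSetupViewModel model)
    {
        if (!ModelState.IsValid)
            return View(model);

        // The controller owns the view-model-to-domain mapping (by hand here; AutoMapper works too).
        service.CreateBusiness(new CreateBusinessRequest
        {
            Name = model.BusinessName,
            Street = model.Street,
            Street2 = model.Street2,
            State = model.State,
            Zip = model.ZIP
        });

        return RedirectToAction("Index");
    }
}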
It is true that remodeling the domain may be the correct choice if you find yourself in need of otherwise breaking the borders of your bounded contexts. Focusing just on the Unit of Work pattern though, I don't quite get your hesitation.
It is the responsibility of a Unit of Work implementation to keep track of all the domain objects that need to be in sync within one transaction. There is nothing odd about the same domain object(s) being involved in several different Units of Work. This does not mean that you should combine "smaller" Unit of Work implementations into "larger" ones when you're dealing with another type of aggregate, but including both "roles" and "businesses" in one Unit of Work, sure.
Doing this should not be driven by what your view looks like, but by what is "true" in your domain model(s); and rather than dealing just with collections of domain objects, your domain should probably describe suitable aggregates.
Maybe it's even OK to store each domain object with separate transactions (Units of Work), i.e. if synchronizing them is not necessary -- e.g. it's still fine if (or perhaps even desired that) failing to persist Business does not stop Roles from getting through or vice versa. Actually, I think one could even argue that if the bounded contexts are in fact correctly defined, this should be the case.
I hope these comments help.
Martin Fowler on Unit of Work and Aggregates.

Appropriate model & context class separation (Architecture)

I'm currently wondering what's the suggested way to separate plain model classes (for e.g. using them in Entity Framework, Web API, MVC, WCF...) from their application logic parts (server-side tasks, threads etc.) while following the DRY principle.
Consider this pseudo example:
public class HorseOfDoom {
    private Thread _hungerThread = new Thread(() => { /* watch hunger level */ });
    private Laser _headMountedLaser = new Laser();

    public int Age { get; set; }
    public string Name { get; set; }
    public int Health { get; set; }
    public int HungerLevel { get; set; }

    public HorseOfDoom() {
        _hungerThread.Start();
    }

    public void PewPew() {
        _headMountedLaser.PewPew();
    }
}
In this class we have both: model properties that describe the model (age, name, ...), but also a thread and methods. I can use this class in Entity Framework, WCF and so on... but what if I want to use the model in an ASP.NET MVC client application without exposing the methods and threads? Do I have to write the same class again? Do I need managers, adapters and facades? Could I use the buddy class pattern?
Use a model fit for the context. DRY is not about repeating lines of code, it's about repeating behaviour. Your view model can have the same properties (copy paste ftw) as the business model, minus the methods. You can use AutoMapper to map one to the other. Chances are your view model will have more than only those properties, including validation attributes or other data needed by the view in a certain format.
A model to rule them all is not good in the long term. Clean models will allow you to focus better on the context and avoid coupling to other contexts, which might use a very similar or identical model. Things change over time, and it's easier to work with a specific model from the beginning, even if that involves copy-paste and it seems that you're repeating yourself.
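A small sketch of that idea, reusing the question's property names; the MapperConfiguration API shown matches AutoMapper 10-13 and newer versions may differ slightly:

using System.ComponentModel.DataAnnotations;
using AutoMapper;

// Business model: data plus behaviour (thread omitted here).
public class HorseOfDoom
{
    public int Age { get; set; }
    public string Name { get; set; }
    public int Health { get; set; }
    public void PewPew() { /* behaviour stays on the business model */ }
}

// View model: only what the view needs, plus view-specific attributes.
public class HorseViewModel
{
    [Required]
    public string Name { get; set; }
    public int Age { get; set; }
    public int Health { get; set; }
}

public static class HorseMapping
{
    public static HorseViewModel ToViewModel(HorseOfDoom horse)
    {
        // In a real application the configuration would be created once and reused.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<HorseOfDoom, HorseViewModel>());
        return config.CreateMapper().Map<HorseViewModel>(horse);
    }
}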
I understand that a combination as you show it in your sample is not really desirable - my main point of critique would be the thread, which already implies a very concrete way in which the object should behave. The probability is high that a thread contained in the class itself will make it harder to use the class in some environments. From my point of view, the platform that integrates the class should be able to choose how to orchestrate the actions of the class - of course the class can impose some restrictions, like "not to be used in a separate thread, as the class is not implemented in a thread-safe way".
As for the point of whether to combine properties and methods in a class: I don't think that there is a clear and always valid answer. It depends very much on how big the architecture of your application is and whether you are willing to pay the price for the separation in terms of complexity and overhead.
The concept of combining properties and methods in a single class is usually referred to as "Domain Model". It is a very natural approach to design complex business logic.
If you have an architecture that sets out to separate the layers very well, you'd have a Domain Model in the business logic that implements the business rules. These classes combine properties and methods, but these classes are mapped to simpler versions (e.g. DTOs) that only transport the data to other layers. This way, you also de-couple a service interface from the domain model and change them with minimal influences on the other layers. For instance, if you have complex classes in the domain model and you want to present only a part of this information in a web interface or through a service layer, you could create one or more DTO classes that contain exactly the data that is needed. Changes to the domain model will not necessarily affect clients so that you gain freedom in this respect.
In a smaller architecture however, you might not need to separate the layers with DTOs if you can live with the consequences.
As for the mentioned example of WCF, you have separate service and data contracts that you typically implement in different classes. If you have additional methods in a class that serves as a data contract those methods will not be part of the data contract. You'd have to explicitly make the methods that you want to publish part of a service contract. If you don't share the classes with a service client (e.g. through a class library), the client will not even know that these methods exist.
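A brief WCF sketch of that point; the types are invented for the example:

using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class HorseDto
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Health { get; set; }

    // Not a [DataMember] and not part of any [OperationContract]:
    // this method is simply not part of the data contract a client sees.
    public bool IsHealthy() => Health > 50;
}

[ServiceContract]
public interface IHorseService
{
    [OperationContract]
    HorseDto GetHorse(string name);   // only operations marked like this are published
}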

What is the best practice to deal with navigation properties when using DTO and POCO objects?

I'm trying to wrap my head around Domain Driven Design. I want to make sure I have a good foundation and understanding of it, so it would be great if recommendations to use AutoMapper or similar could be avoided here. My architecture currently involves the following:
The WCF service is responsible for persistence (using Entity Framework) and server-side validation. It converts POCOs to DTOs, and the DTOs are transferred to the client.
The client receives DTOs and converts them to POCOs. The class that converts between POCOs and DTOs is shared between the service and the client.
The POCOs implement IValidatableObject and INotifyPropertyChanged and are used by both the server and the client, but they are not used for data transfer; the DTOs are, and they are just property bags containing no behavior.
Question #1: Is this architecture appropriate for Domain Driven Design?
Question #2: Is it appropriate for POCOs to contain navigation properties? It really feels wrong to me for POCOs to contain navigation properties in a DDD architecture, because it doesn't make sense to have a navigation property that may or may not be serialized. It would make more sense to me to have a specialized DTO.
For example, here is what a POCO/DTO looks like in my architecture.
// Enforces consistency between a POCO and DTO
public interface IExample
{
Int32 Id { get; set; }
String Name { get; set; }
}
// POCO
public class Example : IExample, INotifyPropertyChanged, IValidatableObject
{
private int id;
private string name;
public Int32 Id {
get { return this.id; }
set {
this.id = value;
OnPropertyChanged("Id");
}
}
public String Name {
get { return this.name; }
set {
this.name = value;
OnPropertyChanged("Name");
}
}
public ICollection<Example2> ChildExamples {
get { ... }
set { ... }
}
// INotifyPropertyChanged Members
// IValidatableObject Members
}
// DTO
public class ExampleInfo : IExample
{
public Int32 Id { get; set; }
public String Name { get; set; }
public ICollection<Example2Info> ChildExamples { get; set; }
}
It doesn't seem right though, because you may not always need the navigation property, and having an empty (null) object (or collection) seems very wrong in an object-oriented architecture. You also have to deal with serializing and converting deep object hierarchies at times, which is not trivial. It would make more sense to have a specialized DTO so there isn't the constant possibility of empty navigation properties that may or may not need to be serialized or populated.
public class ComplexInfo
{
public Example ExampleInfo { get; set; }
public ICollection<Example2Info> ChildExamples { get; set; }
}
How are these situations handled in real-world enterprise DDD style architectures and what other advice can be given here?
I agree with Jehof about sending the DTOs to your client and keeping the domain model clean on the server side behind your WCF service.
With respect to navigation properties, one point Eric Evans emphasizes in Domain Driven Design is to respect invariants. So, in your example above ask yourself if Id and Name are really going to change in the lifetime of the object, or are they invariants? A lot of DDD-style developers would not even put a setter on those properties. Instead build the object's invariant state through a constructor. If Name can change, you probably want a method called Rename(string newName), because there's probably some kind of business rules you'd want to put there anyway.
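To illustrate, here is a sketch of the question's Example entity written that way (the validation rules are obviously made up):

using System;

public class Example
{
    public int Id { get; }              // invariant: set once at construction
    public string Name { get; private set; }

    public Example(int id, string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("A name is required.", nameof(name));
        Id = id;
        Name = name;
    }

    // No public setter; renaming goes through a method where business rules can live.
    public void Rename(string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("A name is required.", nameof(newName));
        Name = newName;
    }
}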
A red flag in your layers above is that you have your whole object model in the DAL. What you call your assemblies really isn't a big deal, but I think it points to a tendency to keep thinking of the application from a data perspective. The point of DDD is to think of your object model in terms of logic and behavior, not data and structure. I (and most other DDD developers, I think) think of the data access layer as Repository classes which return Aggregate Roots. The repositories are responsible for returning your hydrated POCO/entity objects from the DAL (repository) to the business layer (and above, such as an application/service layer class, or the WCF service in your example). In your case of using EF, you'd have the repositories wrap your DataContext calls and return the entity objects.
I could go on and on, because your question really targets the basic fundamentals of DDD, of which there are several. I would recommend: 1) read Eric Evans' book, "Domain Driven Design"; 2) keep in mind that DDD targets complex business software. If you're trying to apply it to a simple CRUD application which really is just UI forms and data binding to DB tables, it's hard to see a DDD approach take shape, because the problems it addresses just aren't there. So keep that in perspective.
Is this architecture appropriate for Domain Driven Design?
Not entirely. Take a look at hexagonal architecture for a description of a more modern architectural style which fits nicely with DDD. In a hexagonal architecture, your domain is at the core and various components "attach" to it. For example, a WCF service would be considered an adapter, because it adapts your domain to a communication technology such as TCP or HTTP. Typically, you would have an application service which establishes a facade over your domain and effectively represents use cases. This application service can be referenced by a WCF service to expose functionality over HTTP. Unfortunately, the "service" terminology can be a bit overloaded.
Is it appropriate for POCOs to contain navigation properties?
It is appropriate, but the right answer is that it depends. One of the issues with navigation properties that you state is that they may or may not be serialized for a specific DTO. This tells me that you are talking about queries. Some queries need only a subset of the attributes on an aggregate/entity (POCO), and thus the corresponding DTO only has those required properties. It seems wasteful to retrieve an entire entity together with its navigation properties. To address this issue you can employ lazy loading. A more scalable approach, however, is to use read models for queries. Also, as stated by others, an entity/aggregate certainly can and should contain navigation properties if they are a reflection of the domain. How these "navigation" properties are implemented can vary. Sometimes it can be better to split an aggregate into multiple aggregates. Take a look at Effective Aggregate Design by Vaughn Vernon.
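For the query side, a read model can be as small as the screen that uses it; a minimal sketch (field names invented):

// Query-side read model: only the fields one screen needs, no navigation graph to hydrate.
public class ExampleListItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int ChildCount { get; set; }   // pre-computed instead of loading ChildExamples
}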
As pointed out by Jehof, you should try to have clients of the WCF service only depend on the contract of that service itself, not on the domain entities (POCOs) that the service encapsulates. Typically, POCOs should not implement INotifyPropertyChanged and IValidatableObject because those interfaces support UI concerns and should be handled by the DTOs or ViewModels.
Domain Driven Design isn't about POCOs or DTOs. It's about Entities, Aggregate Roots, Value Objects - about rich domain objects that can encapsulate behavior in addition to data.
Is it appropriate for POCOs to contain navigation properties?
It's not clear to me what the POCOs are for in your scenario, but if they are your domain entities, then they can and should certainly contain navigation properties. Actually, using the navigation properties of an Aggregate Root (a special kind of domain entity) is often the only way for external objects to access entities enclosed in that Aggregate. Navigation through association properties is a key concept in DDD.
Also, the recommended architecture in DDD looks more or less like:
Presentation Layer (UI)
Application layer
Domain Layer
Infrastructure layer (includes persistence/DAL)
The key here is the Single Responsibility Principle. You don't want a service that does persistence, server-side validation and DTO mapping at the same time. You need decoupling. You need a clear distribution of responsibilities among your layers so that they are more easily maintainable, extensible and portable.
Another suggestion: think very hard about whether to share the mapping code (and, by implication, the classes it maps to) between the client and the server.
There is nothing wrong with sharing code, but be careful you are not mixing client concerns and server concerns. It may start with small compromises "I need this property only on the client, but everything else is the same", but you might end up with flags to tell the class whether to use client or server behavior and other nastiness.
Having separate implementations of the POCOs may seem like code duplication at first, but it frees you to have an implementation fitted to each task.
That's why using AutoMapper and the like makes sense: it lowers the barrier of writing the mapping code.
Another reason to do this (which has also been mentioned) is that the DTOs should be a way to implement a communication API, and not the API itself: i.e. the DTOs are there for WCF to implement a SOAP API (or REST or whatever), but the client should be free to implement the communication layer using only the API specification, without any hidden logic in the mapping code.
This also ensures your API remains language agnostic. You might want to provide client libraries (in any of several appropriate languages) to ease the interaction with your API, but these should not be a requirement.

Creating a large form with multiple dropdowns and text fields in ASP.NET MVC

In my continuing journey through ASP.NET MVC, I am now at the point where I need to render an edit/create form for an entity.
My entity consists of enums and a few other models, created in a repository via LINQtoSQL.
What I am struggling with right now is finding a decent way to render the edit/create forms which will contain a few dropdown lists and a number of text fields. I realize this may not be the most user-friendly approach, but it is what I am going with right now :).
I have a repository layer and a business layer. The controllers interface with the service layer.
Is it best to simply create a viewmodel like so?
public class EventFormViewModel
{
    IEventService _eventService;

    public IEvent Event { get; private set; }
    public IEnumerable<SelectListItem> Campaigns { get; private set; }
    public IEnumerable<SelectListItem> Statuses { get; private set; }
    // Other tables/dropdowns go here

    // Constructor
    public EventFormViewModel(IEventService eventService, IEvent ev)
    {
        _eventService = eventService;
        Event = ev;

        // Initialize collections
        Campaigns = eventService.GetCampaigns().ToSelectList(); // extension method maybe?
        Statuses = eventService.GetStatus().ToSelectList();     // an extension for each table type?
    }
}
So this will give me a new EventFormViewModel which I'll bind to a view. But is this the best way? I'd essentially be pulling all data back from the database for a few different tables and converting them to an IEnumerable. This doesn't seem overly efficient, but I suppose I could cache the contents of the dropdowns.
Also, if all I have is methods that get data for a dropdown, should I just skip the service layer and go right to the repository?
The last part of my question: for the ToSelectList() extension method, would it be possible to write one method and use it generically for every table, even if some tables have different columns ("Id" and "Name" versus "Id" and "CampaignName")?
Forgive me if this is too general, I'm just trying to avoid going down a dead-end road - or one that will have a lot of potholes.
I wouldn't provide an IEventService for my view model object. I prefer to think of the view model object as a dumb data transfer object. I would let the controller take care of asking the IEventService for the data and passing it on to the view model.
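A rough sketch of that split, reusing the question's types; GetEvent is a made-up lookup method and ToSelectList is the questioner's hypothetical extension:

// The view model stays a dumb bag of data: no service reference, no logic.
public class EventFormViewModel
{
    public IEvent Event { get; set; }
    public IEnumerable<SelectListItem> Campaigns { get; set; }
    public IEnumerable<SelectListItem> Statuses { get; set; }
}

public class EventsController : Controller
{
    private readonly IEventService _eventService;

    public EventsController(IEventService eventService)
    {
        _eventService = eventService;
    }

    public ActionResult Edit(int id)
    {
        // The controller asks the service and fills the view model.
        var model = new EventFormViewModel
        {
            Event = _eventService.GetEvent(id),
            Campaigns = _eventService.GetCampaigns().ToSelectList(),
            Statuses = _eventService.GetStatus().ToSelectList()
        };
        return View(model);
    }
}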
I'd essentially be pulling all data back from the database for a few different tables and converting them to an IEnumerable
I don't see why this would be inefficient. You obviously shouldn't pull all the data from the tables. Perform the filtering and joining you need in the database as usual, and put the result in the view model.
Also, if all I have is methods that get data for a dropdown, should I just skip the service layer and go right to the repository?
If your application is very simple, then a service layer may be an unneeded layer of abstraction / indirection. But if your application is just a bit complex (from what you've posted above, I would guess that this is the case), consider what you would gain by taking a shortcut and going straight to a repository, and compare this to what you would win in maintainability and testability if you use a service layer.
The worst thing you could do would be to go through a service layer only when you feel there is a need for it, and go straight to the repository when the service layer would not provide any extra logic. Whatever you do, be consistent (which almost always means: go through a service layer, even when your application is simple. It won't stay simple).
I would say that if you're thinking of "skipping" a layer, then you're not really ready to use MVC. The whole point of the layers, even when they're thin, is to facilitate unit testing and to try to enforce separation of concerns.
As for generic methods, is there some reason you can't just use the out-of-the-box objects and then extend them (with extension methods) when they fail to meet your needs?
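On the ToSelectList() question from the post above: one way to keep it to a single generic extension method, even when tables use different column names, is to let the caller pass selectors. A sketch (assumes System.Linq and the MVC SelectListItem type):

public static class SelectListExtensions
{
    // One generic method; the caller decides which members supply the value and the text.
    public static IEnumerable<SelectListItem> ToSelectList<T>(
        this IEnumerable<T> items,
        Func<T, object> value,
        Func<T, string> text)
    {
        return items.Select(item => new SelectListItem
        {
            Value = value(item).ToString(),
            Text = text(item)
        });
    }
}

// Usage with differently named columns:
// campaigns.ToSelectList(c => c.Id, c => c.CampaignName);
// statuses.ToSelectList(s => s.Id, s => s.Name);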
