Best practice with AutoMapping ViewModel (DTO) to Model (Entity) - C#

I've happily been using AutoMapper in a few projects and made use of .ReverseMap() when going from ViewModel to Model. I'd typically do the following:
// run at startup
// I'd customize the mapping if needed
Mapper.CreateMap<Model, ViewModel>().ReverseMap();

[HttpPost]
public ActionResult Create(ViewModel viewModel)
{
    var data = Mapper.Map<Model>(viewModel);
    _repo.Insert(data);
    _uow.Save();
    return View();
}
Then I find this article: http://lostechies.com/jimmybogard/2009/09/18/the-case-for-two-way-mapping-in-automapper/
I'm at a loss.
Is the article simply outdated or is there a better way?

A disclaimer first: There are all kinds of different domains and architectures and this answer may not apply to your design goals or architecture at all. You're of course free to use AutoMapper however you want. Also, I did not write the article, so I'm using my own experience with certain types of projects to come up with an answer.
The importance of the domain layer
First of all, the article in question assumes you are using some version of domain driven design. At the very least, I'd say it appeals to the idea of a domain that's a very important part of the project and should be "protected." The bit that best sums this idea up is:
...Because then our mapping layer would influence our domain model.
The author did not want artifacts outside of the domain layer updating the domain.
Why? Because:
The domain model is the heart of the project.
Modifying the domain layer should be a deliberate operation, with the most important parts handled by the domain layer itself.
The article mentions a few problems with allowing the mapping piece of the solution to do domain model updates, including:
Forced mutable, public collections, like public EntitySet<Category> Categories { get; } <- NO.
You might wonder why having a mutable, public collection is a bad thing--from a domain model perspective you probably don't want some external service blasting in (possibly invalid) categories whenever it wants.
A more sensible API for adding Categories in this case (sketched in code below) would be to:
Have AddCategory and RemoveCategory methods that the entity itself validates against before adding.
Expose an IEnumerable<Category> { get; } that can never be modified by outside consumers.
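A minimal sketch of that shape (the Product name and the validation rule are illustrative, not from the article):

using System;
using System.Collections.Generic;

public class Product
{
    private readonly List<Category> _categories = new List<Category>();

    // Readable by outside consumers, but never modifiable by them.
    public IEnumerable<Category> Categories
    {
        get { return _categories; }
    }

    // The entity validates before mutating its own state.
    public void AddCategory(Category category)
    {
        if (category == null)
            throw new ArgumentNullException("category");
        if (_categories.Contains(category))
            throw new InvalidOperationException("Category already added.");
        _categories.Add(category);
    }

    public void RemoveCategory(Category category)
    {
        _categories.Remove(category);
    }
}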
A project I worked on recently had a quite complex domain. Domain entities frequently were only updated after prospective update operations were run through several validation services (living in the domain).
We would have introduced a huge design problem had we allowed mapping back to domain entities.
Allowing AutoMapper (or another mapping project) to map directly to our domain entities would be subverting the logic in the domain entities (or more likely domain services that performed validation) and creating a kind of "back door" into the domain layer.
Alternative
Hopefully the commentary above provided a little help. Unfortunately the alternative when you're not doing automatic mapping is to use plain ol' =. At the very least, though, if you're in a DDD environment you'll be forced to think a little more about what should happen before a domain entity is created or updated.
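For instance, instead of mapping straight onto the entity, the update can be spelled out through the entity's own API (a sketch; Rename, ChangePrice, and the view model are names I've made up):

[HttpPost]
public ActionResult Edit(EditProductViewModel viewModel)
{
    var product = _repo.Get(viewModel.Id);

    // The plain ol' '=' lives inside these methods, so validation
    // and invariants stay in the domain, not in a mapping profile.
    product.Rename(viewModel.Name);
    product.ChangePrice(viewModel.Price);

    _uow.Save();
    return RedirectToAction("Index");
}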
However
... the .ReverseMap method exists. I think the article still rings true for a certain type of project, but the built-in ability to create a two-way mapping automatically means the library is meant to serve applications beyond the kind the article targets.
As stated in the disclaimer, two-way mappings might make total sense for your application.

Related

In which layer should live read models (projections) in DDD with Event Sourcing?

In a typical DDD architecture we have 3 layers:
Domain - no references
Application - references the Domain layer
Infrastructure - references the Domain layer
(+ Web / UI project)
Domain models naturally live in the Domain layer. But in which layer should the read models (projections) for the read database (for example, MongoDB) live?
Short answer: both Application Services (Application layer) and Repositories (Infrastructure layer) know about the READ models. The domain layer remains oblivious to the underlying persistence and loading mechanisms.
Long answer: the exact usage mechanism depends on how you use the read models. You could either be using them to construct objects used in your domain layer or, more typically, only as responses to API queries.
First case: Use Read Models as objects in the domain layer
The Application Service loads the domain entity via the repository. It is the repository's responsibility to correctly populate the domain entity from the READ model, and also to transform the domain entity into the WRITE model for persistence in the primary database.
By the time you get to the Domain model, objects are already loaded into memory with the help of repositories. So the domain layer does not even know about the READ and WRITE models; it only deals with the domain entity.
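A rough sketch of this first case (every name here, including the IReadModelStore abstraction standing in for the MongoDB driver, is an illustrative assumption):

using System;

public interface IReadModelStore<T>
{
    T Get(Guid id);
}

public class CustomerRepository
{
    private readonly IReadModelStore<CustomerReadModel> _readStore;

    public CustomerRepository(IReadModelStore<CustomerReadModel> readStore)
    {
        _readStore = readStore;
    }

    // Populate the domain entity from the READ model.
    public Customer GetById(Guid id)
    {
        var readModel = _readStore.Get(id);
        return Customer.Restore(readModel.Id, readModel.Name, readModel.Status);
    }

    // Transform the domain entity into the WRITE model for the primary database.
    public void Save(Customer customer)
    {
        var writeModel = new CustomerWriteModel
        {
            Id = customer.Id,
            Name = customer.Name,
            Status = customer.Status
        };
        // ... persist writeModel to the primary (write) database
    }
}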
Second case: Use Read Models for storing pre-built responses to API queries
This scenario is a more typical use of READ models. Usually there is more than one read model for the same Entity/Aggregate, because each is custom-built for a specific API request.
In this case, we don't touch the domain layer at all. The Application Service accepts the request, uses the READ model repository to load the object, and returns the response.
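In code, this second case might look like the following sketch (reusing the hypothetical IReadModelStore from the previous sketch):

public class OrderQueryService
{
    private readonly IReadModelStore<OrderSummaryReadModel> _summaries;

    public OrderQueryService(IReadModelStore<OrderSummaryReadModel> summaries)
    {
        _summaries = summaries;
    }

    // No domain entity is involved: the pre-built READ model is
    // loaded and returned as the query response directly.
    public OrderSummaryReadModel GetOrderSummary(Guid orderId)
    {
        return _summaries.Get(orderId);
    }
}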
There's no written law that dictates in which project a read model should live. In my personal opinion, having a separate read model project has its benefits. With command query responsibility segregation, things tend to get pretty confusing if the command part of the application can access the query part. I think the two should be clearly separated.
I've spent some time working on an example project that demonstrates how to set up your DDD/ports-and-adapters/CQRS application. I've dropped the code on GitHub: https://github.com/appie2go/steal-this-code
I've also spent some time to explain the choices I've made in detail in the following articles:
https://medium.com/@abstarreveld/implementing-dddomain-models-ports-adapters-and-cqrs-with-c-2b81403f09f7 and,
https://medium.com/@abstarreveld/dddomain-models-ports-adapters-and-cqrs-reference-architecture-c-504817df65ec
Hope it helps!
Cheers
To be honest, it doesn't really matter. There's no default structure for either a DDD-oriented or an event sourcing-oriented implementation.
You can perfectly well have a single project if the system is small. If you want to keep your domain clean of external references, you can keep it in a separate project and take care to have zero references except for things that support the domain model basics, like an entity base class and so on.
Read models and projections are completely orthogonal to the domain model and you usually need them for the query API or query services. You will benefit from keeping read models (documents in case of MongoDB) and projections in one place. You can either reference this project from your API project or keep the query API, query services, query models, read models and projections together.
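For instance, a MongoDB read model and the projection that keeps it up to date could look roughly like this (the document, event, and store types are all assumptions made for illustration):

using System;

// A document in the read database, shaped for one specific query.
public class OrderSummaryDocument
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// An event emitted by the domain model.
public class OrderPlaced
{
    public Guid OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

// The store abstracts the MongoDB collection.
public interface IDocumentStore<T>
{
    void Save(T document);
}

// A projection that updates the document as events arrive.
public class OrderSummaryProjection
{
    private readonly IDocumentStore<OrderSummaryDocument> _store;

    public OrderSummaryProjection(IDocumentStore<OrderSummaryDocument> store)
    {
        _store = store;
    }

    public void When(OrderPlaced e)
    {
        _store.Save(new OrderSummaryDocument
        {
            OrderId = e.OrderId,
            CustomerName = e.CustomerName,
            Total = e.Total
        });
    }
}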
Again, I would argue that such a thing as a "typical DDD architecture" doesn't exist, because DDD is not an architecture to start with. Splitting projects is more a developer-convenience and discipline concern; splitting the system is the architectural concern, and that is not DDD-specific.
One thing that also comes to mind: if you really think DDD, you might first want to find out what your context map is and how many domain models you really need; maybe there you can find some ideas about separation that aren't really based on technical concerns.

Onion Architecture: Should we allow data annotations in our domain entities?

I am looking to implement the Onion Architecture in our ASP.NET MVC application. I understand the need to separate View Models from Domain Entities; however, I am finding myself writing redundant code. The code is redundant because my view models and domain entities look exactly the same, with the exception that my view models have the [Serializable] data annotation. I need these models to be serializable because I am using ASP.NET Session State, in which the State Server needs objects to be serializable.
I personally feel the domain entities should NOT be serializable because it would then become dependent on a particular technology. However, how can I avoid redundant code?
I should add that my service methods are dependent on these serializable data models.
I would avoid annotating my domain objects with anything persistence-related or non-domain-related. This way, my Domain project doesn't depend on another layer and isn't cluttered with things that aren't relevant to the Domain. While we sometimes need to bend the rules, I prefer bending them in a way that doesn't involve a dependency on a persistence detail.
The point is to keep the layers focused on their purpose because it's very easy to mix'em up and create (in time) the big ball of mud.
In your case, I have the feeling you don't really have a rich domain or it's improperly modeled. It seems you only have data structures and your needs are CRUDy.
If you are certain the app won't evolve to become more complex, i.e. it will just be data-structure manipulation, then you can have one model and use it for all purposes. Basically, you can cut corners and use the 'business' model for everything. No need for abstractions and other stuff.
But if you think the app will evolve, or there are or will be business rules and processes, i.e. you'll need to model behaviour as perceived by the business, then it's best to keep things very decoupled, even if at this stage they seem identical.
To make it easier, define the view model as its own class (copy-paste is fine) and use AutoMapper to map the business object to it. Another approach is for your query service/repository/object to return that view model directly (map the query result to the view model).
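For example, with the classic static AutoMapper API (Product and ProductViewModel are illustrative names):

// At startup: a one-way map, domain -> view model only.
Mapper.CreateMap<Product, ProductViewModel>();

// At query time: the domain object never learns about the UI.
var viewModel = Mapper.Map<ProductViewModel>(product);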
Viewmodels can contain domain entities/models. My domain entities are partial classes and all (eventually) inherit from a base entity which is serializable. Since I use my domain models within some of my viewmodels, I use data annotations on the domain models as well. Your domain model library should not depend on or reference anything else (domain driven).
I wouldn't call [Serializable] a data annotation per se, since it's part of the core .Net platform (mscorlib.dll). Data Annotations refers to specific attributes used to perform data validation and other operations in ASP.Net or Entity Framework.
Should an Entity be [Serializable]? Probably not, but I don't think this attribute "adulterates" your domain layer as much as attributes that come from external, web- or persistence-oriented libraries. It doesn't tie you to a particular third-party framework.
The whole "redundant code" issue depends IMO on the type of system you're building. In a genuine DDD application, duplication between Domain entities and View Models will probably not be all that blatant, and the benefits of a separate presentation model will typically outweigh the costs. A good article on that subject : Is Layering Worth The Mapping ?

In the model layer, is it a good idea to compose types with IDs, as opposed to direct references?

Although this question is in the context of MVVM, I think that it can be generalized to any MV* architecture.
When creating my model layer, I'm used to directly referencing objects to represent relationships, as such:
class Course {
    CourseID ID { get; }
    string Name { get; }
}

class Student {
    IEnumerable<Course> EnrolledCourses { get; }
}
However, I've found that the reconstruction of such object hierarchies from storage is becoming increasingly onerous. Without the benefits of DI at the model layer, I'm left facing the unhappy choice of using a heavy ORM with all the attendant attributes and other headaches, or of using a micro-ORM (my preference) and then painstakingly reconstructing the object graphs by hand.
I'm playing around with the idea of ditching direct references entirely, in favor of something like:
class Course {
    CourseID ID { get; }
    string Name { get; }
}

class Student {
    IEnumerable<CourseID> EnrolledCourses { get; }
}
This way, my model layer starts to more closely resemble relational data, which I've always assumed was traditionally frowned upon (thus the preference for ORMs).
In my code, the app's data is usually exposed to higher levels via the Repository pattern, as reactive feeds and/or IEnumerables. This makes it trivially easy to retrieve and display related data on demand via queries and/or filters by key. Not quite as easy as a direct reference, but close.
So - what's the main argument AGAINST modeling domain objects without references to other types? Also, I've tried to find discussion about this but haven't seen much; could it be that I'm missing the right search terms?
In any application you want to be maintainable, you need to respect separation of concerns. MV* is always part of the UI layer, and most of the time you also have a Business (domain) layer and a Persistence Layer (DAL).
SoC means you get to focus on one layer at a time. The ViewModel is designed for the View, the Domain model cares only about business, and the Persistence Model knows about saving and querying. These are related but are not the same, and it's best to think of them as different.
When you're modeling the Domain you don't care about the other layers; you want to best represent the business concepts (and this is very tricky, as a lot of them seem easy to model, but it's a trap!) and the use cases of those concepts. There aren't any references here, there are only business processes using business concepts, i.e. you should think at a higher level, not at a technical level, even if you're writing code.
For example, the code you showed us really looks like a view model, because the Domain concept of Student probably doesn't involve Courses (your view needs them together). You have Student and Course, and these two concepts work together in a number of business scenarios. It might be better (I don't know the exact details of your Domain) to have a service whose purpose is to enroll Students in Courses.
From the persistence point of view, you might have something like a repository of enrolled students, i.e. a collection of CourseId and StudentId pairs. The service I mentioned above could probably be implemented directly in the persistence layer if it doesn't contain business logic. Pretty much all design decisions require a proper understanding of the Domain, so I'm just guessing here, but I'm trying to show how to think.
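A sketch of that idea, assuming a StudentID to mirror the question's CourseID and that both entities expose their IDs (the rules hinted at in the comment are invented for illustration):

// Enrollment is its own domain concept, referencing both sides by Id.
public class Enrollment
{
    public StudentID StudentId { get; private set; }
    public CourseID CourseId { get; private set; }

    public Enrollment(StudentID studentId, CourseID courseId)
    {
        StudentId = studentId;
        CourseId = courseId;
    }
}

// A domain service owns the business rules for enrolling.
public class EnrollmentService
{
    public Enrollment Enroll(Student student, Course course)
    {
        // e.g. check prerequisites, capacity, term dates here...
        return new Enrollment(student.ID, course.ID);
    }
}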
Be aware that for different contexts you really don't need the full object, only its 'short' form, which usually means a reference (as in an Id). But from the context's point of view, that reference represents a concept. Note that the entities or references are not there for navigational purposes; they are there because they help define a domain concept.
Also, it makes little sense to define a business object whose single purpose is to act as a container for others. In domain modelling, the 'has' relationship should be viewed as 'is defined by'.
You should get familiar with CQRS (separating writes from reads). This will make saving/restoring the business model trivial while allowing you to get wild with queries. And you won't be needing an ORM.
Bottom line: model things according to the layer (concern) you're working in, then use mappers to 'translate' the relevant model from one layer/context to another. Don't try to create the ultimate model that will fit all purposes.

What is wrong with two-way mapping?

I've been using AutoMapper for a few months now with success, but now I've hit a bit of a stumbling block. What I need (or think I need) is two-way mapping: for when I load an item from the database to display on the screen (domain object -> view model), and for when the user makes changes to said item and I map it back to my domain object (view model -> domain object).
I understand that I could simply create a mapping in my profile to handle the two-way mapping, but I was reading here that two-way mappings are unnecessary. Many people indicate that doing so is a response to not fixing the bigger issue (whatever that may be).
I'm just wondering why is this a code smell?
When I was a junior dev, I worked on a large project which basically did what you are describing. We didn't use AutoMapper, but we created viewmodels that encapsulated domain objects, which basically meant that changes went from the view directly to the domain objects. Just persist your changes and presto, they are where you want them to be (in the database). That application should have been released four years ago, but they are still struggling.
Why is this a smell? Well, you lose track of any intent as to why you are changing stuff. And intent is something that is really, really important as your application grows and becomes more complex. It's difficult to enforce new rules in your domain because it is difficult to see exactly which operations are valid to perform on it. If you make your domain model auto-mappable, it's also very anemic.
As Jimmy points out, you want to model your domain after the requirements of your domain, not the requirements of AutoMapper. If AutoMapper were to work directly on your model, you would have to make adjustments like making property setters public, even though that might not be a good idea from a domain-modeling perspective.
I think the bigger issue is that a domain model which can be auto-mapped directly from viewmodels conveys neither intent nor encapsulation in a satisfying manner. If you are creating a small app, then an active record/dataset-style architecture might not be a bad thing, but if the solution is large-scale or complex you have bigger issues than mapping from viewmodel to domain.
In the current application I am working on, we use AutoMapper to map from domain to DTOs and from DTOs to viewmodels. When something is to be persisted, we translate the operation on the viewmodels into explicit commands which are executed against the domain. I would never recommend a plain three-layered architecture in any large-scale app (thin or thick client).
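To make "explicit commands" concrete, here is a sketch (all the names, including ICourseRepository, are hypothetical):

using System;

// The UI's edit is translated into an intention-revealing command...
public class RenameCourseCommand
{
    public Guid CourseId { get; private set; }
    public string NewName { get; private set; }

    public RenameCourseCommand(Guid courseId, string newName)
    {
        CourseId = courseId;
        NewName = newName;
    }
}

// ...and a handler executes it against the domain, rather than
// auto-mapping view model properties straight onto the entity.
public class RenameCourseHandler
{
    private readonly ICourseRepository _courses;

    public RenameCourseHandler(ICourseRepository courses)
    {
        _courses = courses;
    }

    public void Handle(RenameCourseCommand command)
    {
        var course = _courses.Get(command.CourseId);
        course.Rename(command.NewName); // the intent survives in the code
        _courses.Save(course);
    }
}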

Pattern for retrieving complex object graphs with Repository Pattern with Entity Framework

We have an ASP.NET MVC site that uses Entity Framework abstractions with Repository and UnitOfWork patterns. What I'm wondering is how others have implemented navigation of complex object graphs with these patterns. Let me give an example from one of our controllers:
var model = new EligibilityViewModel
{
    Country = person.Pathway.Country.Name,
    Pathway = person.Pathway.Name,
    Answers = person.Answers.ToList(),
    ScoreResult = new ScoreResult(person.Score.Value),
    DpaText = person.Pathway.Country.Legal.DPA.Description,
    DpaQuestions = person.Pathway.Country.Legal.DPA.Questions,
    Terms = person.Pathway.Country.Legal.Terms,
    HowHearAboutUsOptions = person.Pathway.Referrers
};
It's a registration process and pretty much everything hangs off the POCO class Person. In this case we're caching the person through the registration process. I've now started implementing the latter part of the registration process which requires access to data deeper in the object graph. Specifically DPA data which hangs off Legal inside Country.
The code above is just mapping out the model information into a simpler format for the ViewModel. My question is do you consider this fairly deep navigation of the graph good practice or would you abstract out the retrieval of the objects further down the graph into repositories?
In my opinion, the important question here is - have you disabled LazyLoading?
If you haven't done anything, then it's on by default.
So when you access Person.Pathway.Country, you will be invoking another call to the database server (unless you're doing eager loading, which I'll speak about in a moment). Given you're using the Repository pattern, this is a big no-no. Controllers should not cause direct calls to the database server.
Once a Controller has received the information from the Model, it should be ready to do projection (if necessary) and pass it on to the View, not go back to the Model.
This is why in our implementation (we also use Repository, EF4, and Unit of Work), we disable lazy loading and allow the navigational properties to be passed through via our service layer (a series of "Include" statements, made sweeter by enumerations and extension methods).
We then eager-load these properties as the Controllers require them. But the important thing is, the Controller must explicitly request them.
Which basically tells the UI - "Hey, you're only getting the core information about this entity. If you want anything else, ask for it".
We also have a Service Layer mediating between the controllers and the repository (our repositories return IQueryable<T>). This allows the repository to get out of the business of handling complex associations. The eager loading is done at the service layer (as well as things like paging).
The benefit of the service layer is simple: looser coupling. The Repository handles only Add, Remove, and Find (which returns IQueryable); the Unit of Work handles "newing up" DCs and committing changes; the Service layer handles materialization of entities into concrete collections.
It's a nice, 1-1 stack-like approach:
personService.FindSingle(1, "Addresses") // Controller calls service
|
--- Person FindSingle(int id, params string[] includes) // Service Interface
|
--- return personRepository.Find().WithIncludes(includes).WithId(id); // Service calls Repository, adds on "filter" extension methods
|
--- IQueryable<T> Find() // Repository
|
-- return db.Persons; // returns IQueryable of Persons (deferred exec)
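A minimal sketch of what the "filter" extension methods might look like, assuming EF4's string-based Include on ObjectQuery (and that Person exposes an Id):

using System.Data.Objects;
using System.Linq;

public static class QueryExtensions
{
    // Eager-load the requested navigation properties.
    // Assumes the IQueryable is backed by an EF ObjectQuery.
    public static IQueryable<T> WithIncludes<T>(this IQueryable<T> query,
        params string[] includes) where T : class
    {
        var objectQuery = (ObjectQuery<T>)query;
        foreach (var include in includes)
            objectQuery = objectQuery.Include(include);
        return objectQuery;
    }

    public static Person WithId(this IQueryable<Person> query, int id)
    {
        return query.SingleOrDefault(p => p.Id == id);
    }
}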
We haven't got up to the MVC layer yet (we're doing TDD), but a service layer could be another place you could hydrate the core entities into ViewModels. And again - it would be up to the controller to decide how much information it wishes.
Again, it's all about loose coupling. Your controllers should be as simplistic as possible, and not have to worry about complex associations.
In terms of how many Repositories to have, this is a highly debated topic. Some like to have one per entity (overkill if you ask me), some like to group them based on functionality (makes sense in terms of functionality, easier to work with); we have one per aggregate root.
Going by your Model, I'd guess that "Person" is the only aggregate root I can see.
Therefore, it doesn't make much sense having another repository to handle "Pathways", when a pathway is always associated with a particular "Person". The Person repository should handle this.
Again - maybe if you screencapped your EDMX, we could give you more tips.
This answer might extend a little too far beyond the scope of the question, but I thought I'd give an in-depth answer, as we are dealing with this exact scenario right now.
HTH.
It depends on how much of the information you're using at any one time.
For example, if you just want to get the country name for a person (person.Pathway.Country.Name) what is the point in hydrating all of the other objects from the database?
When I only need a small part of the data I tend to just pull out what I'm going to use. In other words I will project into an anonymous type (or a specially made concrete type if I must have one).
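For example, with the Person graph from the question (a sketch; context stands for your ObjectContext or repository source):

// Pull just the country name instead of hydrating Person -> Pathway -> Country.
var countryName = context.Persons
    .Where(p => p.Id == personId)
    .Select(p => p.Pathway.Country.Name)
    .Single();

// Or project several fields into an anonymous type in one round trip.
var summary = context.Persons
    .Where(p => p.Id == personId)
    .Select(p => new
    {
        Pathway = p.Pathway.Name,
        Country = p.Pathway.Country.Name
    })
    .Single();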
It's not a good idea to pull out an entire object and everything related to it every time you want to access a few properties. What if you're doing this once every postback, or even multiple times? By doing so you might be making life easier in the short term at the cost of making your application less scalable in the long term.
As I stated at the start though, there isn't a one size fits all rule for this, but I'd say it's rare that you need to hydrate that much information.
