Separating the Domain Model and the Data Model - C#

My question is similar to this one: Repository pattern and mapping between domain models and Entity Framework.
I have done a lot of reading on here about the following:
Mapping the ORM directly to the domain model
Mapping the ORM to a data model and then mapping the data model to a domain model (and vice versa)
I understand the benefits and limitations of both approaches. I also understand the scenarios where one approach is favoured over the other.
There are plenty of examples online which show how to do option 1. However, I cannot find any example code which shows how to do option 2. I have read questions on here about option two, like the one referenced at the top of this post, i.e. the question is about option two but the answer is about option one - and there are comments that state that option two may be more appropriate.
Therefore my question is specifically about how to do option two from a mapping and validation perspective:
Mapping
I believe I can do this when mapping the Data Model to the Domain Model:
public PersonDomain GetById(Guid id)
{
    return AutoMapper.Mapper.Map<PersonDomain>(Session.Get<PersonData>(id));
}
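For the AutoMapper call above to work, I register the mapping once at startup; a minimal sketch, assuming a version of AutoMapper that still exposes the static Mapper API used above:

// One-time configuration at application startup (static API, matching the call above).
AutoMapper.Mapper.Initialize(cfg =>
{
    // Only the PersonData -> PersonDomain map is needed for GetById.
    cfg.CreateMap<PersonData, PersonDomain>();
});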
However, I believe I have to do this instead when mapping the Data Model to the Domain Model in the repository (to protect the invariants):
protected PersonDomain ToPersonDomain(PersonData personData)
{
    // Arguments must match the PersonDomain constructor: (id, dateOfBirth, name).
    return new PersonDomain(personData.ID, personData.DateOfBirth, personData.Name);
}
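For the opposite direction (persisting a domain object), I believe the repository can flatten the domain model back into the data model before handing it to the ORM. A minimal sketch with a hypothetical ToPersonData counterpart (this assumes PersonData has public setters):

protected PersonData ToPersonData(PersonDomain personDomain)
{
    // Flatten the domain object back into the persistence model before saving.
    return new PersonData
    {
        ID = personDomain.ID,
        Name = personDomain.Name,
        DateOfBirth = personDomain.DateOfBirth
    };
}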
Validation
I want to do this in the PersonDomain class:
public class PersonDomain
{
    public Guid ID { get; private set; }
    public DateTime DateOfBirth { get; private set; }
    public string Name { get; private set; }

    public PersonDomain(Guid id, DateTime dateOfBirth, string name)
    {
        if (id == Guid.Empty)
            throw new ArgumentException("Guid cannot be empty");
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("Name cannot be empty");
        ID = id;
        Name = name;
        DateOfBirth = dateOfBirth;
    }
}
However, every example I find tells me not to put validation in the constructor. One idea I had was to avoid primitive obsession as follows:
public class PersonDomain
{
    public ID ID { get; private set; }
    public DateOfBirth DateOfBirth { get; private set; }
    public Name Name { get; private set; }

    public PersonDomain(ID id, DateOfBirth dateOfBirth, Name name)
    {
        if (id == null)
            throw new ArgumentNullException(nameof(id));
        if (name == null)
            throw new ArgumentNullException(nameof(name));
        ID = id;
        Name = name;
        DateOfBirth = dateOfBirth;
    }
}
However, in this case, there is still validation in the constructor.
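For reference, the value objects themselves would then own their validation. A minimal sketch of what a Name value object might look like (this class is not shown above and is only illustrative):

public class Name
{
    public string Value { get; private set; }

    public Name(string value)
    {
        // The invariant now lives with the concept itself rather than in PersonDomain.
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Name cannot be empty", nameof(value));
        Value = value;
    }

    public override string ToString() => Value;
}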
Questions
My two questions are:
Have I understood the mapping between the Domain Model and Data Model (and vice versa) correctly or is there a more elegant way of approaching this (the mapping between the data model and domain model and vice versa)?
Should I be putting any validation logic in the constructor of the PersonDomain Entity in this case?
Update 27/02/18
This link helped me most: http://www.dataworks.ie/Blog/Item/entity_framework_5_with_automapper_and_repository_pattern

every example I find tells me not to put validation in the constructor.
I think you need to find more examples.
It may help to think about what's going on at a deeper level. Fundamentally, what we are trying to do is ensure that a precondition holds. One way to do this is to verify the precondition "everywhere"; but the DRY principle suggests that we would prefer to capture the precondition at a choke point, and ensure that all code paths that require that precondition must pass through that choke point.
In Java (where DDD began) and C#, we can get the type system to do a lot of the heavy lifting; the type system enforces the guarantee that any use of the type has gone through the constructor, so if we establish in the constructor that the precondition holds, we're good to go.
The key idea here isn't "constructor", but "choke point"; using a named constructor, or a factory, can serve just as well.
If your mapping code path passes through the choke point, great.
If it doesn't..., you lose the advantage that the type checking was providing.
One possible answer is to make your domain model more explicit; and acknowledge the existence of unvalidated representations of domain concepts, which can later be explicitly validated.
If you squint, you might recognize this as a way of handling inputs from untrusted sources. We explicitly model untrusted data, and let our mapping code produce it for us, and then within the domain model we arrange for the untrusted data to pass through the choke points, and then do work on the sanitized variants.
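As a rough sketch of that idea (the type and method names below are illustrative, not taken from the question): the mapping code produces an unvalidated shape, and the only way to obtain a validated PersonDomain is through a single factory method that enforces the preconditions - the choke point:

// Raw, untrusted shape produced by the mapper / ORM layer; no invariants guaranteed.
public class UnvalidatedPerson
{
    public Guid ID { get; set; }
    public string Name { get; set; }
    public DateTime DateOfBirth { get; set; }
}

public class PersonDomain
{
    public Guid ID { get; private set; }
    public string Name { get; private set; }
    public DateTime DateOfBirth { get; private set; }

    // Private constructor: the factory below is the only way into the type.
    private PersonDomain(Guid id, string name, DateTime dateOfBirth)
    {
        ID = id;
        Name = name;
        DateOfBirth = dateOfBirth;
    }

    public static PersonDomain From(UnvalidatedPerson raw)
    {
        if (raw.ID == Guid.Empty)
            throw new ArgumentException("Guid cannot be empty");
        if (string.IsNullOrWhiteSpace(raw.Name))
            throw new ArgumentException("Name cannot be empty");
        return new PersonDomain(raw.ID, raw.Name, raw.DateOfBirth);
    }
}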
Domain Modeling Made Functional covers this idea well; you can get a preview of the main themes by watching Scott Wlaschin's talk Domain Driven Design with the F# type System

1) Have I understood the mapping between the Domain Model and Data Model (and vice versa) correctly or is there a more elegant way of approaching this (the mapping between the data model and domain model and vice versa)?
I would say that the ORM should map the Domain Model (Entity) to the database while you would use a Data Model for representing data to the outside world (UI, REST...).
2) Should I be putting any validation logic in the constructor of the PersonDomain Entity in this case?
It's OK to put domain validation logic into the domain object constructor. But if you want to do validation that is UI-specific, it should probably be done in some validation class mapped to the Data Model so that you can return a nice error to the user.
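A minimal sketch of what such a validation class might look like, returning user-facing errors instead of throwing (the class and member names are only illustrative):

public class PersonDataValidator
{
    // Collects user-facing errors for the data model instead of throwing exceptions.
    public IList<string> Validate(PersonData personData)
    {
        var errors = new List<string>();
        if (string.IsNullOrWhiteSpace(personData.Name))
            errors.Add("Please enter a name.");
        if (personData.DateOfBirth > DateTime.Today)
            errors.Add("Date of birth cannot be in the future.");
        return errors;
    }
}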

Related

non-persistent properties in domain data model

I hear that for a small project DTOs are not recommended, for example here and here. I wonder if it is OK for a considerably small project (team-wise) to merge non-persistent properties into the domain models? E.g.:
namespace Domain.Entities
{
    public class Candidate : BaseEntity
    {
        public Candidate()
        {
            // some construction codes
        }

        // region persistent properties
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public bool? IsMale { get; set; }
        public DateTime BirthDate { get; set; }
        // other properties ...

        // region non-persistent properties
        public string FullName => $"{FirstName} {LastName}";
    }
}
Is this just keeping it simple, or am I losing anything valuable this way?
I'm not advocating a particular approach, just sharing information...
I wouldn't put your computation of FullName in a DTO. A DTO is just a simple object, really more of a struct, and shouldn't have any logic in it. The purpose of a DTO is to move data from one layer/tier to another and create a layer of indirection that allows your domain model to evolve independent of your clients. FullName on your Entity as a non-persistent property makes more sense here than in the DTO. If you want to go full enterprise, it would be in a transformer/adapter.
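To illustrate the transformer/adapter idea (the DTO and adapter below are only illustrative): the DTO stays a dumb bag of data, and the adapter decides what gets exposed and computes derived values such as FullName:

// Dumb data carrier: no behaviour, only what the client is meant to see.
public class CandidateDto
{
    public string FullName { get; set; }
    public DateTime BirthDate { get; set; }
}

// Transformer/adapter: owns the mapping logic, including derived values.
public static class CandidateAdapter
{
    public static CandidateDto ToDto(Candidate candidate)
    {
        return new CandidateDto
        {
            FullName = $"{candidate.FirstName} {candidate.LastName}",
            BirthDate = candidate.BirthDate
        };
    }
}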
If your project is really small, and is likely never going to grow, then abandoning the DTO can be acceptable. Just keep in mind, that if your project grows you may have to do some refactoring, and there are some other things to consider...
Another benefit of the DTO is keeping some data where it needs to stay. For example, if you have sensitive data in your entity object and you don't put something in place to prevent it from being returned in a web request, you just leaked some information off your app server layer (think the password field in your user entity). A DTO requires you to think about what is being sent to/from the client and makes including data an explicitly intentional act vs an unintentional act. DTOs also make it easier to document what is really required for a client request.
That being said, each DTO is now code you have to write and maintain, which is the main reason to avoid them, and a model change can have a noticeable ripple effect through the system.
It comes down to deciding how you want to handle potential data leakage, how you want to manage your clients (if you can), and how complex your model may get.

Am I supposed to use Objects or ViewModels for MVC or both?

I'm learning MVC. I have an application that I developed using webforms that I'm porting over, but I've hit a bit of a snag.
I'm using the Entity Framework as well.
Currently, my models represent database tables. Generally my controller will grab the data from the table and create a view model that will be passed to the view.
Where I'm a bit confused is when I need to make some transformations based on the model data and pass the result to the view: I'm not sure where that transformation takes place.
In the webforms application I would have a class where I would create new objects from and then all of the transformations would happen there. A good example would be a User; the database would store first and last name, and the object would have a public property for FullName that would get set in the constructor.
I've read some conflicting discussions on Thin Controllers/Fat Models, and vice-versa. I'm just a little confused. Thin controllers and fat models seem to be the recommended way according to the Microsoft docs, but they don't really give real-world examples of doing so.
So with an example:
public class UserEntity
{
    public int ID { get; set; }
    public string FName { get; set; }
    public string LName { get; set; }
}

public class UserController : Controller
{
    protected readonly DBContext _context;

    public UserController(DBContext context)
    {
        _context = context;
    }

    public IActionResult Index(int id)
    {
        var _user = _context.Users.Single(u => u.ID == id);
        UserViewModel _viewModel = new UserViewModel
        {
            FirstName = _user.FName,
            LastName = _user.LName,
            FullName = ""
        };
        return View(_viewModel);
    }
}
If the above isn't perfect, forgive me - I just wrote it up for a quick example. It's not intended to be flawless code.
For FullName, where would I put the logic that would give me that information? Now, I realize that in this very simple example I could easily get the full name right there. But let's just pretend that it's a much more complex operation than concatenating two strings. Where would I place a GetFullName method?
Would I have a method in my model? Would I instead create a User class and pass the returned model data? What about having a separate class library? If either of the latter, would I pass User objects to my view model or would I set view model properties from the User object that was created?
Entity Framework maps a representation of the business onto a relational data implementation, which is ideal for a clean representation of the business model. But within a web page, that direct representation often doesn't translate or play well with the application structure.
You usually end up implementing a pattern called model-view-view-model (MVVM): basically, a transformation of one or more entities into a single object to be placed within the view as a model. This transformation solves an abundance of issues. For example:
public class UserModel
{
    private readonly UserEntity user;

    public UserModel(UserEntity user) => this.user = user;

    public string Name => $"{user.FName} {user.LName}";
}
The entity and database store a user's name separated into first and last. But placing the entity into another structure allows you to build a representative model that adheres to the view. Obviously a simple example, but the approach is often utilized for a more transparent representation, since the view and database may not directly coincide with each other exactly.
So now your controller would do something along these lines.
public class UserController : Controller
{
    public IActionResult Index(int id) => View(new UserModel(new UserService().GetUserInformation(id)));
}
As I finished answering, I noticed a comment that expresses what I'm trying to say quite well:
ViewModels are what the name implies. Models for specific views. They aren't domain entities or DTOs. If a method makes sense for a view's model, a good place to put it is in the ViewModel. Validations, notifications, calculated properties etc. are all good candidates. A mortgage calculator on the other hand would be a bad candidate - that's a business functionality.
– Panagiotis Kanavos 7 mins ago

DDD: guidance on updating multiple properties of entities

So, I decided to learn DDD as it seems to solve some architectural problems I have been facing. While there are lots of videos and sample blogs, I have not encountered one that guides me to solve the following scenario:
Suppose I have the entity:
public class EventOrganizer : IEntity
{
    public Guid Id { get; }
    public string Name { get; }
    public PhoneNumber PrimaryPhone { get; }
    public PhoneNumber AlternatePhone { get; private set; }
    public Email Email { get; private set; }

    public EventOrganizer(string name, PhoneNumber primaryPhoneNr)
    {
        #region validations
        if (primaryPhoneNr == null) throw new ArgumentNullException(nameof(primaryPhoneNr));
        // validates minimum length, nullity and special characters
        Validator.AsPersonName(name);
        #endregion

        Id = Guid.NewGuid();
        Name = name;
        PrimaryPhone = primaryPhoneNr;
    }
}
My problem is: suppose this will be converted and fed to an MVC view, and the user wants to update the AlternatePhone, the Email and a lot of other properties that make sense to exist within this entity for the given bounded context (not shown for brevity).
I understand that the correct guidance is to have a method for each operation, but (AND I KNOW IT'S KIND OF AN ANTI-PATTERN) I can't help but wonder if this won't end up triggering multiple update calls on the database.
How is this handled? Somewhere down the line, will there be something that maps my EventOrganizer to something - say DbEventOrganizer - and gathers all changes made to the domain entity and applies those in a single go?
DDD is better suited for task-based UIs. What you describe is very CRUD-oriented. In your case, individual properties are treated as independent data fields where one or many of these can be updated by a single generic business operation (update).
You will have to perform a deeper analysis of your domain than this if you want to be successful with DDD.
Why would someone update all those fields together? What implicit business operation is the user trying to achieve by doing that? Is there a more concrete business process that is expressed by changing PrimaryPhone, AlternatePhone and Email together?
Perhaps that is changing the ContactInformation of an EventOrganizer? If that's the case then you could model a single ChangeContactInformation operation on EventOrganizer. Your UI would then send a ChangeContactInformation command rather than an update command.
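A rough sketch of what that might look like as a single intention-revealing method on EventOrganizer (this assumes PrimaryPhone gets a private setter; the method itself is only illustrative):

public void ChangeContactInformation(PhoneNumber primaryPhone, PhoneNumber alternatePhone, Email email)
{
    // One explicit business operation instead of several generic property updates.
    // (PrimaryPhone would need a private setter for this to compile.)
    if (primaryPhone == null) throw new ArgumentNullException(nameof(primaryPhone));

    PrimaryPhone = primaryPhone;
    AlternatePhone = alternatePhone;
    Email = email;
}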
As for the persistence of your aggregate roots (AR), this is usually handled by an ORM like NHibernate if you are using a RDBMS. However, there are other ways to persist your ARs like Event Sourcing, NoSQL DBs and even storing JSON or any other data inter-change formats in a RDBMS.
Your question is quite broad!
EventOrganizer itself should not be updating anything. You should keep your update code quite separate from the entity. A different class would take an EventOrganizer object and update the DB. This is called 'persistence ignorance' and makes the code a lot more modular and cohesive.
It would be common to create a View Model - a class whose purpose is to provide the View with the exact data it needs in the exact form it needs. You would create the View Model from your EventOrganizer, after which the View can update it - programmatically or with binding. When you're ready to save the changes, you update your EventOrganizer from the View Model and pass it on to the updater. This seems like a layer you don't need when the project is small and simple, but it becomes invaluable as the complexity builds.
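A minimal sketch of that round trip (the view model, the updater class and the way changes are applied back to the entity are all assumptions, not code from the question):

// Flat, view-friendly shape that the view binds to and edits.
public class EventOrganizerViewModel
{
    public string AlternatePhone { get; set; }
    public string Email { get; set; }
}

public class EventOrganizerUpdater
{
    // Applies the edited view model back onto the domain entity, then persists it.
    public void Save(EventOrganizer organizer, EventOrganizerViewModel viewModel)
    {
        // Assumes the entity exposes an operation for applying these changes
        // (for example a ChangeContactInformation-style method) and that
        // PhoneNumber and Email can be constructed from strings.
        organizer.ChangeContactInformation(
            organizer.PrimaryPhone,
            new PhoneNumber(viewModel.AlternatePhone),
            new Email(viewModel.Email));

        // A separate persistence class then writes the entity to the database.
        // _eventOrganizerRepository.Save(organizer);
    }
}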

Domain Logic leaking into Queries in MVC.NET application using DDD

I am trying to implement a query that fetches a projection of data for an MVC view from a DB managed by the domain model.
I've read that MVC controllers returning static views should request DTOs from query handlers or so-called read model repositories, rather than using aggregate root repositories returning full-fledged domain objects. This way we maximize performance (optimizing queries for the needed data) and reduce the risk of domain model misuse (we can't accidentally change the model with DTOs).
The problem is that some DTO properties can't be mapped directly to a DB table field and may be populated based on some business rule, or be the result of some condition that is not explicitly stored in the DB. That means the query acts upon logic leaking from the domain. I have heard that this isn't right, and that queries should directly filter, order, project and aggregate data from DB tables (using LINQ queries and EF in my case).
I envision 2 solutions so far:
1) Read model repositories internally query full domain model objects and use them to populate DTO properties (importantly, those requiring some business logic). Here we don't gain the performance benefits, as we act upon instantiated domain models.
2) The other solution is to cache all the data that will ever be required in the DB, probably through domain repositories (dealing with aggregate roots), so queries act upon data fields (with cached values) without touching domain logic. The consistency of the cached data will then be maintained by the domain repositories, which results in some overhead as well.
Examples:
1) A business rule can be as simple as the string representation of certain objects or data (across the system), i.e. formatting;
2) A business rule can be a calculated field returning a bool, as in the simple domain model below:
// first aggregate root
public class AssignedForm
{
    public int Id { get; set; }
    public string FormName { get; set; }
    public ICollection<FormRevision> FormRevisions { get; set; }

    public bool HasTrackingInformation
    {
        get
        {
            return FormRevisions.Any(
                fr => fr.RevisionType == ERevisionType.DiffCopy
                      && fr.FormRevisionItems.Any());
        }
    }

    public void CreateNextRevision()
    {
        if (HasTrackingInformation)
        {
            // .......
        }
        // .......
    }
}
public enum ERevisionType { FullCopy = 0, DiffCopy = 1 }

public class FormRevision
{
    public int Id { get; set; }
    public ERevisionType RevisionType { get; set; }
    public ICollection<FormRevisionItem> FormRevisionItems { get; set; }
}
And then we have a read model repository, say IFormTrackingInfoReader, returning a collection of objects:
public class FormTrackingInfo
{
    public int AssignedFormId { get; set; }
    public string AssignedFormName { get; set; }
    public bool HasTrackingInformation { get; set; }
}
The question is how to implement IFormTrackingInfoReader and populate the HasTrackingInformation property while sticking to the DRY principle and without domain logic leaking into the query. I saw people just return domain objects and use mapping to populate the view model. Probably this is the way to go. Thank you for your help.
I don't like solution 1; the domain model is not persistence ignorant.
Personally, I prefer solution 2. But the "ever required data" may be a problem: if a new query requirement emerges, perhaps you'll need some data migration (I have heard that replaying events will do the trick when using event sourcing). So I'm wondering if there is a hybrid solution: use value objects to implement the derivations, and create new value object instances from the DTO.
public class SomeDto {
    public String getSomeDerivation() {
        return new ValueObject(/* some data in this DTO */).someDerivation();
    }
}
In this case, I think domain logic is protected and separated from persistence. But I haven't tried this in real projects.
UPDATE1:
The hybrid solution does not fit your particular FormTrackingInfo case, but your solution two does. One example (sorry, I'm not a .NET guy, so it's in Java):
public class CustomerReadModel {
    private String phoneNumber;
    // ... other properties

    public String getPhoneNumber() {
        return phoneNumber;
    }

    public String getAreaCode() {
        return new PhoneNumber(this.phoneNumber).getAreaCode(); // PhoneNumber is a value object.
    }
}
But like I said, I haven't tried it in real projects, I think it's at most an interim solution when the cached data is not ready.

How would you classify this type of design for classes?

The following type of design I have seen basically has "thin" classes, excluding any type of behaviour. A secondary class is used to insert/update/delete/get.
Is this wrong? Is it anti OOP?
User.cs
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }
}
Users.cs
public class Users
{
    public static User LoadUser(int userID)
    {
        DBProvider db = new DBProvider();
        return db.LoadUser(userID);
    }
}
While your User.cs class lends itself towards a data transfer object, the Users.cs class is essentially where you can apply business rules within the data-access objects.
You may want to think about the naming convention of your classes along with the namespaces. When I look at a users.cs, I'm assuming that it will essentially be a class for working with a list of users.
Another option would be to look into the Active Record Pattern, which would combine the two classes that you've created.
User.cs
public class User
{
    public string Username { get; set; }
    public string Password { get; set; }

    public User(int userID)
    {
        // open data connection
        // get record for userID
        this.Username = datarecord["username"];
        this.Password = datarecord["password"];
    }
}
I would classify it as a domain object or business object. One benefit of this kind of design is that it keeps the model agnostic of any business logic, and it can be reused in different kinds of environments.
The second class could be classified as a DAO (Data Access Object).
This pattern is not anti-oop at all and is widely used.
I think you're implementing a domain model and a data-access object. It's a good idea.
The first class is anti-OOP because it contains data without behaviour, a typical example of an anemic domain model. It's typical for people who do procedural programming in an OO language.
However, opinions are divided on whether it makes sense to put DB access logic into the domain model itself (Active Record pattern) or, as in your code, into a separate class (Data Access Object pattern), since DB access is a separate technical concern that should not necessarily be closely coupled with the domain model.
It looks like it could be the Repository pattern; this seems to be an increasingly common pattern and is used to great effect in Rob Conery's Storefront example ASP.NET MVC app.
You're basically abstracting your data access code away from the Model, which is a good thing, generally. Though I would hope for a little more guts to the model class. Also, from previous experience, calling it Users is confusing; UserRepository might be better. You might also want to consider removing static (which is a hot debate), as that makes mocking easier. Plus the repository should implement an interface so you can mock it and hence replace it with a fake later.
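A small sketch of that suggestion, with an interface and a non-static repository (the names are only illustrative):

public interface IUserRepository
{
    User LoadUser(int userID);
}

// Instance-based, non-static implementation: easy to replace with a fake in tests.
public class UserRepository : IUserRepository
{
    public User LoadUser(int userID)
    {
        DBProvider db = new DBProvider();
        return db.LoadUser(userID);
    }
}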
It's not really object-oriented in any sense, since the object is nothing but a clump of data sticking together. Not that that's a terrible thing.
