Data Access Layer and DDD - C#

I am trying to learn the idea of Domain-Driven Design and to figure out where we should put the database persistence code.
Going through the book "Implementing Domain Driven Design by Vaughn Vernon", I understand that the Repository or the Database operations (along with connections etc.,) have to be kept in the Model project (Domain Model Context) itself. Is my understanding correct?
I am referring to his samples of IdentityAccessContext, AgilePMContext, etc., where there are interfaces to the repository. I had always grown up with the idea of having a separate Data Layer, and adding one here leads me to a circular dependency, because the interfaces and entities are declared in the Model, which the Data layer requires, and the Data layer needs to be referenced from the Model for persistence.
Is this correct or am I missing something?

Going through the book "Implementing Domain Driven Design by Vaughn
Vernon", I understand that the Repository or the Database operations
(along with connections etc.,) have to be kept in the Model project
(Domain Model Context) itself. Is my understanding correct?
Repository abstractions (interfaces) should be in the Domain layer, but their concrete implementations are in an Infrastructure layer.
See his code samples for the book: https://github.com/VaughnVernon/IDDD_Samples/tree/master/iddd_agilepm/src/main/java/com/saasovation/agilepm; for instance, the TeamRepository interface in domain/model and LevelDBTeamMemberRepository in port/adapter/persistence.
There's no circular reference because persistence is strongly coupled to Domain but Domain is only loosely coupled to persistence if it ever needs it (thanks to Inversion of Control, most of the time achieved by Dependency Injection).
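As a minimal sketch of that arrangement (the Team entity, the EF-based implementation and all names here are illustrative assumptions, not taken from the book's samples), the interface lives in the Domain project and the implementation in an Infrastructure project:

using System;
using System.Data.Entity; // EF6, assumed here purely for illustration

// Domain project: only the abstraction, no persistence references.
public interface ITeamRepository
{
    Team TeamOfId(Guid teamId);
    void Save(Team team);
}

// Infrastructure project: references the Domain project, never the other way around.
public class EfTeamRepository : ITeamRepository
{
    private readonly DbContext _context;

    public EfTeamRepository(DbContext context)
    {
        _context = context;
    }

    public Team TeamOfId(Guid teamId)
    {
        return _context.Set<Team>().Find(teamId);
    }

    public void Save(Team team)
    {
        _context.Set<Team>().Add(team);
        _context.SaveChanges();
    }
}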

I wouldn't say there's a data access layer per se.
Actually, repositories translate domain objects into data and vice versa using a data mapper.
Models, services or transaction scripts must use repositories to both get and persist objects. There's no circular reference here:
The service layer injects repositories to implement domain operations. That is, a service knows what to do when it requires objects, and it is not concerned with how they are translated into data.
Repositories use a data mapper to persist and get data (1).
The data mapper is usually an OR/M like Entity Framework, NHibernate, or Dapper.
Furthermore, a good DDD implementation will enforce inversion of control, which means that:
Upper layers know about lower layers.
Lower layers don't own any reference to upper layers.
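Putting the points above together, a minimal sketch of a service that depends only on a repository abstraction could look like this (IAccountRepository, the Account entity and its Withdraw/Deposit methods are hypothetical names, not from any specific framework):

using System;

public class TransferService
{
    private readonly IAccountRepository _accounts;

    // The concrete repository is supplied by the composition root (e.g. a DI container).
    public TransferService(IAccountRepository accounts)
    {
        _accounts = accounts;
    }

    public void Transfer(Guid fromId, Guid toId, decimal amount)
    {
        var source = _accounts.GetById(fromId);
        var target = _accounts.GetById(toId);

        source.Withdraw(amount);   // domain behaviour
        target.Deposit(amount);    // domain behaviour

        _accounts.Save(source);    // persistence stays behind the abstraction
        _accounts.Save(target);
    }
}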
In summary, DDD doesn't have a DAL in the sense you're thinking of; instead it abstracts and encapsulates each concern so that upper layers are decoupled from the underlying data approach.
About the circular reference thing...
OP said:
Because the Interfaces, Entities are declared in Model which is
required for the Data layer and the Data layer needs to be referred
from model for persistence.
Double-check your statement. If models represent a higher layer of abstraction than data, why should the data layer reference the models?
It is the model that requires the data. And the data should be accessed through interfaces to keep the models agnostic about the data access strategy. This is called inversion of control, as I said earlier in this answer.
Maybe you think the data layer requires a reference to the entities because you're still thinking the old way, but if you practice a good separation of concerns and you work with a data mapper (or implement one yourself), the data mapper just maps objects to raw data and vice versa, and it doesn't require a reference to concrete classes like domain objects (ask yourself how Entity Framework can persist your entities to your favourite database engine without even knowing about these entities (2)).
Anyway, as I said earlier in this answer, you shouldn't think in terms of a DAL but of a set of repositories plus a data mapper, or just repositories (see exception (1)).
Usually the dependent lower layers are instantiated and given to upper layers using the dependency injection pattern (a possible implementation of inversion of control).
(1) Exception to this rule: if we're not talking about relational storage, maybe there's no data mapper at all, and the repository implements its interface by accessing the data at a lower level of abstraction/encapsulation. For example, if the repository implements its operations to store data in XML, then it would use an XDocument or XmlDocument internally and there's no actual data mapper at all.
(2) Actually, OR/M frameworks like Entity Framework learn the shape of your model through configuration. While the framework doesn't require a reference to your concrete domain in order to compile, it requires you to provide class maps using Code First or other approaches.
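For example, with Entity Framework 6's Code First fluent API the class map lives in the persistence layer, while the Customer class itself stays a plain POCO (Customer and the table/column details are illustrative assumptions):

using System.Data.Entity.ModelConfiguration; // EF6

// Persistence layer: describes how the POCO domain class maps to the database.
// The Customer class itself carries no attributes and no reference to EF.
public class CustomerMap : EntityTypeConfiguration<Customer>
{
    public CustomerMap()
    {
        ToTable("Customers");
        HasKey(c => c.Id);
        Property(c => c.Name).HasMaxLength(200).IsRequired();
    }
}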

Related

How do I return a domain object from the repository when there is a separate data model?

Update
My research is telling me that I should be using a Data Mapper: https://martinfowler.com/eaaCatalog/dataMapper.html. Are Data Mappers injected into repositories like this: http://www.rantdriven.com/post/2009/09/01/Using-the-Repository-Pattern-with-the-Command-and-DataMapper-Patterns.aspx and this: Repository and Data Mapper pattern, or are they used as an alternative to a repository? All the examples I find seem to use Data Mappers to map DataReader objects to lists of domain objects. I want to map persistent objects to domain objects.
Original Question
I am trying to build a Domain Model that is completely isolated from the Data Model after reading articles like this:
http://blog.sapiensworks.com/post/2012/04/07/Just-Stop-It!-The-Domain-Model-Is-Not-The-Persistence-Model.aspx. There are only six of us working on this system at the moment; however, this could increase to 9+ in the future. I am beginning to think that this is not the right approach, though. I have read a lot of questions on here that appear to tell me to map the ORM directly to the Domain Model.
I recently asked this question: https://softwareengineering.stackexchange.com/questions/365643/should-the-data-model-be-identical-to-the-domain-model-for-mapping-purposes. One of the answerers says: "I believe the mapping in between both should be within a (persistence-oriented) repository". How do you do this mapping? I don't believe I should be using AutoMapper in the repository because of the reasons stated here: Repository pattern and mapping between domain models and Entity Framework and here: http://enterprisecraftsmanship.com/2016/02/08/specification-pattern-c-implementation/ i.e. I cannot simply do this in the repository:
public CustomerDomain GetById(int id)
{
    // CustomerData is the persistence (data model) object
    CustomerData customerData = customerDataRepository.GetById(id);
    return Mapper.Map<CustomerDomain>(customerData);
}
I cannot do this because the invariants of the domain object will not be considered. How can I return a domain object from the repository? Do I inject a factory into the repository, which takes parameters from the data model and returns a domain model? Is this even the right approach, or is there another pattern for mapping data objects to domain objects?
Quick review of repositories, as defined by Evans
REPOSITORIES (provide) the means of finding and retrieving persistent objects while encapsulating the immense infrastructure involved.
(REPOSITORIES) provide the illusion of an in memory collection...
So, the API interface should normally be expressed in a domain specific vocabulary; neither the application layer nor the domain model should need to know any specifics beyond what is expressed in the API.
Commonly, the immense infrastructure means a domain-agnostic persistence appliance; we don't persist domain objects, we persist bytes. With some appliances, we work very closely with the byte representation -- think streaming data to and from files. In other cases, the appliance provides an abstraction of those bytes -- an RDBMS gives us an API that understands rows in tables, and encapsulates the details of how the bytes are arranged.
What this means is that somewhere we need a transformation from a domain-agnostic representation to a domain-specific representation, and vice versa.
Normally, these take the form of functions.
toDomainModel: JsonDocument -> DomainModel
toJson: DomainModel -> JsonDocument
These functions are normally defined by, and invoked by, the repository implementation -- they are part of the immense infrastructure that Evans describes.
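A C# sketch of the same idea, where a plain CustomerRecord plays the role of the JsonDocument above (CustomerRecord, ICustomerDataGateway, ICustomerRepository and the Customer constructor are all names invented for illustration):

using System;

public class CustomerRepository : ICustomerRepository
{
    private readonly ICustomerDataGateway _gateway;    // domain-agnostic persistence API (assumed)

    public CustomerRepository(ICustomerDataGateway gateway)
    {
        _gateway = gateway;
    }

    public Customer GetById(Guid id)
    {
        CustomerRecord record = _gateway.Load(id);      // raw, domain-agnostic representation
        return ToDomainModel(record);                   // toDomainModel: record -> domain object
    }

    public void Save(Customer customer)
    {
        _gateway.Store(ToRecord(customer));             // toJson-style: domain object -> record
    }

    // The transformation functions are defined and invoked here,
    // inside the repository implementation, as described above.
    private static Customer ToDomainModel(CustomerRecord record) =>
        new Customer(record.Id, record.Name);

    private static CustomerRecord ToRecord(Customer customer) =>
        new CustomerRecord { Id = customer.Id, Name = customer.Name };
}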
I cannot do this because the invariants of the domain object will not be considered.
There are a couple possibilities here.
One, of course, would be to get a smarter mapper.
A second possibility is to model the unvalidated representation of the model as a distinct thing from a validated representation.
Example: consider a model for Money, that requires an Amount and a CurrencyCode; and the semantics of your model requires that Amount be a positive number and CurrencyCode be an entry in a fixed collection of tokens. There's nothing wrong with having an UnvalidatedMoney type, that has no semantic restrictions, and a function that converts UnvalidatedMoney to Money (enforcing the invariant).
This is analogous to what Scott Wlaschin describes for modeling a verified email address.
Note that this isn't necessarily an undue burden; if you have an invariant on some domain concept like money, you probably already have validation to do when passing new input from the world into your model. So the work is mostly about making that validation element re-usable.
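A compact sketch of that Money example (the currency list here is an illustrative placeholder, not a real registry):

using System;
using System.Collections.Generic;

// No semantic restrictions: just whatever came out of persistence or user input.
public sealed class UnvalidatedMoney
{
    public decimal Amount { get; set; }
    public string CurrencyCode { get; set; }
}

public sealed class Money
{
    private static readonly HashSet<string> KnownCurrencies =
        new HashSet<string> { "USD", "EUR", "GBP" };

    public decimal Amount { get; }
    public string CurrencyCode { get; }

    private Money(decimal amount, string currencyCode)
    {
        Amount = amount;
        CurrencyCode = currencyCode;
    }

    // The only way to obtain a Money: the invariant is enforced here.
    public static Money From(UnvalidatedMoney raw)
    {
        if (raw.Amount <= 0)
            throw new ArgumentException("Amount must be positive.");
        if (!KnownCurrencies.Contains(raw.CurrencyCode))
            throw new ArgumentException("Unknown currency code.");

        return new Money(raw.Amount, raw.CurrencyCode);
    }
}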
A third possibility is to look into the Builder pattern. Persistent objects are, from one point of view, messages that a model wrote in the past so that it could read them in the future. So it is often the case that looking at common messaging patterns is useful.
In this approach, we would load an instance of the builder (created for us by a factory, implemented by the domain model), which has an API that allows the repository to pass the domain agnostic data to the domain model. Again, the message builder, being provided by the domain model, knows the domain invariant and can validate it.
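Roughly, and reusing the hypothetical Money sketch above (all names remain invented), the builder the repository receives could look like this:

// Defined in the domain model; the repository only ever sees this API.
public class MoneyBuilder
{
    private decimal _amount;
    private string _currencyCode;

    public MoneyBuilder WithAmount(decimal amount)
    {
        _amount = amount;
        return this;
    }

    public MoneyBuilder WithCurrencyCode(string currencyCode)
    {
        _currencyCode = currencyCode;
        return this;
    }

    // The builder is provided by the domain, so it knows the invariant
    // and validates when the repository asks for the finished object.
    public Money Build() =>
        Money.From(new UnvalidatedMoney { Amount = _amount, CurrencyCode = _currencyCode });
}

The repository then just feeds the raw, domain-agnostic values into the builder and asks for the result, e.g. new MoneyBuilder().WithAmount(rawAmount).WithCurrencyCode(rawCurrency).Build(), where rawAmount and rawCurrency stand for whatever the persistence appliance returned.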
All the validation for the domain object is done in the constructor. Should I just inject a factory into the repository?
That should be fine. You'll want to keep the coupling as small as you can manage (think interface), and you'll want to bear in mind that the factory is now part of the public API for the model (think about how to change the factory so that it remains backwards compatible).
ORM and Data Mapper
Most .NET ORM frameworks use a data mapper internally, or can be considered a kind of data mapper themselves. Some ORMs in other tech stacks may use the alternative approach to mapping: Active Record. The difference between the two is that in Active Record, the business class knows how to persist itself while a Data Mapper makes it agnostic from any persistence mechanism.
This is a different distinction than the one between having a Data Model plus a Domain Model and mapping straight to the Domain Model. Regardless of whether you map directly to the Domain or have an intermediate Data Model, you will always have a Data Mapper. Both approaches imply it. The Data Mapper can be an ORM tool or custom code.
It's not clear from your question if you intend to use both an ORM and an additional Data Mapper on top, but I wouldn't recommend trying that or calling your Data Model <=> Domain Model mapping logic a "Data Mapper" in the PoEAA sense of the term.
Object hydration and invariants
Oftentimes, domain object hydration by a Data Mapper goes through a different path than normal use-case-driven code in order to bypass the validation that would otherwise happen. This can be done via parameterless constructors with restricted scope (protected, or internal with InternalsVisibleTo) that the ORM or custom code has access to. Some newer reflection-based frameworks can also access private fields.
With a Data Model in addition to your Domain Model, this is all simpler, since you don't need to restrict access to a Data Model object the way you would a Domain entity. Data Models don't have invariants, so you can safely leave their parameterless constructor and properties publicly accessible and settable for rehydration. The only thing you need to take care of is to have some FromData() method or constructor on your domain entity that takes a data model as input and instantiates the entity. Again, the technique is made safe thanks to accessibility level restrictions -- the hydration code can typically live in an assembly granted InternalsVisibleTo by the domain assembly.
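A compact illustration of that FromData() technique, under the assumption that CustomerData is the plain data model and Customer the domain entity (both names invented here):

using System;

// Data model: public, settable, no invariants -- safe to let the ORM populate it.
public class CustomerData
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// Domain entity: invariants enforced on the normal construction path.
public class Customer
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }

    public Customer(Guid id, string name)
    {
        if (string.IsNullOrWhiteSpace(name))
            throw new ArgumentException("Name is required.", nameof(name));
        Id = id;
        Name = name;
    }

    private Customer() { } // rehydration only

    // Internal, so only assemblies granted [InternalsVisibleTo] by the domain
    // assembly (typically the persistence code) can rehydrate from trusted data.
    internal static Customer FromData(CustomerData data) =>
        new Customer { Id = data.Id, Name = data.Name };
}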

Onion Architecture: Should we allow data annotations in our domain entities?

I am looking to implement the Onion Architecture in our ASP.NET MVC application. I understand the need to separate View Models from Domain Entities; however, I find myself writing redundant code, because my view models and domain entities look exactly the same except that my view models have the [Serializable] data annotation. I need these models to be serializable because I am using ASP.NET Session State, whose State Server requires objects to be serializable.
I personally feel that the domain entities should NOT be serializable, because they would then become dependent on a particular technology. However, how can I avoid the redundant code?
I should add that my service methods are dependent on these serializable data models.
I would avoid annotating my domain objects with anything persistence-related or non-domain-related. This way, my Domain project doesn't depend on another layer and isn't cluttered with things that aren't relevant to the Domain. While we sometimes need to bend the rules, I prefer bending them in a way that doesn't involve a dependency on a persistence detail.
The point is to keep the layers focused on their purpose, because it's very easy to mix them up and, in time, create the big ball of mud.
In your case, I have the feeling you don't really have a rich domain, or it's improperly modeled. It seems you only have data structures and your needs are CRUDy.
If you are certain the app won't evolve to become more complex, i.e. it will just be data-structure manipulation, then you can have one model and use it for all purposes. Basically, you can cut corners and use the 'business' model for everything; no need for abstractions and other stuff.
But if you think the app will evolve, or there are or will be business rules and processes, i.e. you'll need to model behaviour as perceived by the business, then it's best to keep things very decoupled, even if at this stage they seem identical.
To make it easier, you can define a view model (with copy-paste) and use AutoMapper to map the business object to it, as in the sketch below. Another approach is for your query service/repository/object to return that view model directly (map the query result to the view model).
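For instance, with AutoMapper that mapping might be configured once and then applied wherever a view needs it (Customer and CustomerViewModel are assumed names standing in for your own types):

using AutoMapper;

public static class ViewModelMapping
{
    // Configure the mapping once (e.g. at application startup).
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Customer, CustomerViewModel>())
            .CreateMapper();

    // Map the business object to its copy-pasted view model.
    public static CustomerViewModel ToViewModel(Customer customer) =>
        Mapper.Map<CustomerViewModel>(customer);
}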
View models can contain domain entities/models. My domain entities are partial classes and all (eventually) inherit from a base entity that is serializable. Since I use my domain models within some of my view models, I use data annotations on the domain models as well. Your domain model library should not depend on or reference anything else (domain driven).
I wouldn't call [Serializable] a data annotation per se, since it's part of the core .NET platform (mscorlib.dll). Data Annotations refers to specific attributes used to perform data validation and other operations in ASP.NET or Entity Framework.
Should an Entity be [Serializable]? Probably not, but I don't think this attribute is as "adulterating" to your domain layer as other attributes that come from external, web- or persistence-oriented libraries. It doesn't tie you to a particular third-party framework.
The whole "redundant code" issue depends, IMO, on the type of system you're building. In a genuine DDD application, duplication between Domain entities and View Models will probably not be all that blatant, and the benefits of a separate presentation model will typically outweigh the costs. A good article on that subject: Is Layering Worth The Mapping?

Entities in shared layer (cross cutting concern) in a layered application?

In a layered application, is it good practice to have your entities defined in a shared layer? I figure that I will be using them across all layers. Or do they belong in the business layer?
MSDN's layered application guideline puts the business entities in the business layer
The Layered Architecture Sample for .NET puts the entities in the shared layer
Can it be like this?
Presentation
Business
Data
Shared
    Entities
Or must it be like this?
Presentation
Business
    Entities
Data
Shared
What to do and why?
I usually organize projects in the following structure:
Presentation (MVC application)
try to keep your controllers as small as possible. Do not put any business logic into controllers. Rely on service interfaces instead of concrete implementations. Use dependency injection.
Business layer
service classes belong here, and they should contain all the business logic
I group related services into folders by feature. Each service queries the DB with Entity Framework and maps the results into Model (a.k.a. View Model, Presentation Object) objects, so the service layer does not return DB entities but POCO classes (see the sketch after this list).
a shared folder contains services that are used across multiple services (they are more like infrastructure code, but I prefer to keep them inside the business/service project)
DAL (data access layer)
I prefer to use only Entity Framework without any other abstraction on top of it. Some people use repositories or implement their own unit of work pattern, but I do not recommend doing this. Entity Framework already implements unit of work and encapsulates database selects with LINQ, so there is no need for more abstraction.
this layer contains only Code First classes (Entity Framework entities)
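A sketch of a service in that style; AppDbContext (with an Orders DbSet), Order, OrderModel and IOrderService are assumed names, not part of any framework:

using System.Collections.Generic;
using System.Linq;

public class OrderService : IOrderService
{
    private readonly AppDbContext _db;   // Entity Framework Code First context from the DAL

    public OrderService(AppDbContext db)
    {
        _db = db;
    }

    public List<OrderModel> GetOrdersForCustomer(int customerId)
    {
        // Query EF entities, but return plain presentation models to the caller.
        return _db.Orders
                  .Where(o => o.CustomerId == customerId)
                  .Select(o => new OrderModel { Id = o.Id, Total = o.Total })
                  .ToList();
    }
}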
I would say it depends if these entities contain business logic or not.
From the Layered Application Guidelines:
Business entities also validate the data contained within the entity
and encapsulate business logic to ensure consistency and to implement
business rules and behavior.
In contrast, the Layered Architecture Solution Guidance seems to rely on code generation to create Entities; they are mere data containers with little to no logic in them.
Rich domain entities tend not to be in a Shared module, since that would mean carrying around a ton of behavior that you don't want everyone to have (imagine being able to manipulate business logic directly on the client side...). Anemic ones, on the contrary, are lightweight and can be happily and conveniently distributed everywhere.
My approach is a little bit different. In the data layer I store all my entities, and in the shared layer I have DTOs (Data Transfer Objects) which are exact copies of the entities but without Entity Framework control. To map between them, I use a mapper (AutoMapper), which is fluent and easy to use.
I can't understand why Entity Framework doesn't support interfaces and works only with concrete classes.

Repository Pattern with multiple tables

Recently I have been reading up on using the repository pattern and DI to help create easily testable code, and I think I understand it for the most part. However, I'm having difficulty with one issue. I need to create a Rules object for my application's business layer. To create a rule, I need the ability to read and write to two tables. How would you go about implementing a repository that uses two tables for one object?
for example:
ICollection<type> GetAllRules();
What would I put in for type as it requires two tables?
I wouldn't insist on having a repository for that.
As Fowler says
Conceptually, a Repository encapsulates the set of objects persisted in a data store and the operations performed over them, providing a more object-oriented view of the persistence layer.
This is probably why most implementations tend to expose pure domain objects rather than derivatives (which your Rule object seems to be).
I would have two repositories for the two tables you mention, then a unit of work to expose all repositories, and then a business-layer service responsible for the compound processing.
An advantage of such an approach is that the repository layer remains clean: there is no business processing involved and no unclear rules introduced into the persistence layer.
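A rough sketch of that arrangement; the repository interfaces, the Rule constructor and the RuleDefinition/RuleCondition split are invented for illustration:

using System;
using System.Collections.Generic;
using System.Linq;

// Unit of work exposes the individual table-level repositories.
public interface IUnitOfWork : IDisposable
{
    IRuleDefinitionRepository RuleDefinitions { get; }
    IRuleConditionRepository RuleConditions { get; }
    void Commit();
}

// Business layer service responsible for the compound processing.
public class RuleService
{
    private readonly IUnitOfWork _uow;

    public RuleService(IUnitOfWork uow)
    {
        _uow = uow;
    }

    public ICollection<Rule> GetAllRules()
    {
        var definitions = _uow.RuleDefinitions.GetAll();
        var conditions = _uow.RuleConditions.GetAll();

        // The Rule business object is composed from both tables here,
        // not inside the persistence layer.
        return definitions
            .Select(d => new Rule(d, conditions.Where(c => c.RuleId == d.Id).ToList()))
            .ToList();
    }
}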

In which layer must entity classes be defined?

When creating a Business Model, in which layer (GUI, BLL, DAL) must the entity classes be defined?
Your entities are a part of your Business Logic. In your entities you define your business rules.
They should be ignorant of the type of Data Access you use. This can be done by using the Repository pattern. In your BLL you define your Repository interfaces which act on your entities. In a separate infrastructure project, you will define an implementation for the Repositories.
Whether you pass your entities to your GUI is a matter of choice. Sometimes it can be beneficial to use specially crafted classes for passing data to your view, but in a small project you could opt for passing your entities directly to your GUI.
You can define them in two places:
Either create a new layer, Model/Entities (preferred),
or
define them in the Data Access Layer.
I would say: in their own layer. The GUI, the business layer and the data access layer all use the entities. But the GUI doesn't depend on the data access layer, and the data access layer doesn't depend on the service layer. So entities must be in their own domain layer.
It depends on the way you want to use your entity. If it's a simple POCO object used as a DTO from the DB to your application, then I think the best place is the DAL. If you want to use your entity as part of the business logic and it has some functionality, then the BLL is the best place. But I don't think there is any case where it should be defined and used in the GUI.
I think it's good practice to have a ViewModel for any GUI purposes, because when you use EF it means you interact with SQL somehow (in most cases), so your data is normalized. On the other hand, you often need denormalized data for the GUI. That's why I prefer to use a ViewModel for the GUI.
