I have a scenario in which some custom entities are being used in a (desktop) system, bound to its UI. I have shifted to Entity Framework for the benefits it provides, but the system will continue to use the custom entities to bring data from the UI, as the custom entities are tightly coupled with it.
Now, if I want to remove the system's dependence on these custom entities, say because I want to use my services from the web or some other platform, what are my options from a design perspective?
I think that to remove the dependence on custom entities I will have to use Data Transfer Objects, say POCOs.
So should I use the POCO entities that EF provides for this purpose, or write them separately?
Does that make sense? What should my approach be?
Even if domain objects are implemented as POCOs, they're still domain objects and shouldn't be transferred to other physical tiers without using a DTO.
Think of Entity Framework entities as proxies of your POCO-style domain objects, created in order to inject, for example, change tracking and lazy loading. Also, a domain object might contain more data than is required in some parts of your UI.
For example, say you've implemented a grid with 3 columns: name, second name and age. You bind user profiles to that grid, but your Profile domain object also has more properties, so this transfers more data than your three-column grid requires!
That's why you'll want to keep your domain in your domain. The data emitted by your services should serialize DTOs that cover your actual UI use cases, so you won't be transferring fat objects over the wire, which might add extra costs to your organization (you usually pay for network usage in hosting environments). And, obviously, serializing small objects is less intensive, isn't it?
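As a minimal sketch of that idea (the Profile and ProfileGridItemDto names are illustrative, not from the original post), the DTO carries only what the three-column grid actually binds to:

// Hypothetical domain object carrying more data than the grid needs.
public class Profile
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string SecondName { get; set; }
    public int Age { get; set; }
    public string Biography { get; set; }     // not needed by the grid
    public byte[] AvatarImage { get; set; }   // definitely not needed by the grid
}

// DTO shaped for the three-column grid: only what the UI actually displays.
public class ProfileGridItemDto
{
    public string Name { get; set; }
    public string SecondName { get; set; }
    public int Age { get; set; }
}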
A clean architecture will have a class library assembly (let's call it Common) containing your POCO objects. By definition, POCO objects are persistence-ignorant and contain no behaviour. They just represent your entities.
In a separate assembly (let's call it DataAccess), reference the Common assembly and create the mappings for Entity Framework using the DbContext and EntityTypeConfiguration<T> classes. This means that you won't use an EDMX file but will create your mappings with the fluent interface, which is arguably the best way to use EF anyway.
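A minimal sketch of this layout, assuming EF6-style code first and an illustrative City entity (the type, table and assembly names are mine, not from the original post):

using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// Common assembly: a persistence-ignorant POCO.
public class City
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// DataAccess assembly: the fluent mapping via EntityTypeConfiguration<T>.
public class CityConfiguration : EntityTypeConfiguration<City>
{
    public CityConfiguration()
    {
        ToTable("Cities");
        HasKey(c => c.Id);
        Property(c => c.Title).IsRequired().HasMaxLength(200);
    }
}

// DataAccess assembly: the context registers the configuration, so no EDMX is needed.
public class AppDbContext : DbContext
{
    public DbSet<City> Cities { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new CityConfiguration());
    }
}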
At this point you have reusable and decoupled objects in an assembly and your data access logic and mappings in another assembly.
On top of this you can add an IoC container to keep things even more decoupled, but I think that's a bit off topic.
Update
My research is telling me that I should be using a Data Mapper: https://martinfowler.com/eaaCatalog/dataMapper.html. Are Data Mappers injected into repositories, as shown here: http://www.rantdriven.com/post/2009/09/01/Using-the-Repository-Pattern-with-the-Command-and-DataMapper-Patterns.aspx and here: Repository and Data Mapper pattern, or are they used as an alternative to a repository? All the examples I find seem to use Data Mappers to map DataReader objects to lists of domain objects. I want to map persistent objects to domain objects.
Original Question
I am trying to build a Domain Model that is completely isolated from the Data Model, after reading articles like this:
http://blog.sapiensworks.com/post/2012/04/07/Just-Stop-It!-The-Domain-Model-Is-Not-The-Persistence-Model.aspx. There are only six of us working on this system at the moment; however, this could increase to 9+ in the future. Still, I am beginning to think that this is not the right approach. I have read a lot of questions on here which appear to tell me to map the ORM directly to the Domain Model.
I recently asked this question: https://softwareengineering.stackexchange.com/questions/365643/should-the-data-model-be-identical-to-the-domain-model-for-mapping-purposes. One of the answerers says: "I believe the mapping in between both should be within a (persistence-oriented) repository". How do you do this mapping? I don't believe I should be using AutoMapper in the repository because of the reasons stated here: Repository pattern and mapping between domain models and Entity Framework and here: http://enterprisecraftsmanship.com/2016/02/08/specification-pattern-c-implementation/ i.e. I cannot simply do this in the repository:
public CustomerDomain GetById(int id)
{
    CustomerData customerData = customerRepository.GetById(id);
    return Mapper.Map<CustomerDomain>(customerData);
}
I cannot do this because the invariants of the domain object will not be considered. How can I return a domain object from the repository? Do I inject a factory into the repository, which will take parameters from the data model and return a domain model? Is this even the right approach, or is there another pattern for mapping data objects to domain objects?
Quick review of repositories, as defined by Evans
REPOSITORIES (provide) the means of finding and retrieving persistent objects while encapsulating the immense infrastructure involved.
(REPOSITORIES) provide the illusion of an in memory collection...
So, the API interface should normally be expressed in a domain specific vocabulary; neither the application layer nor the domain model should need to know any specifics beyond what is expressed in the API.
Commonly, the immense infrastructure means a domain-agnostic persistence appliance; we don't persist domain objects, we persist bytes. With some appliances, we work very closely with the byte representation -- think streaming data to and from files. In other cases, the appliance provides an abstraction of those bytes -- an RDBMS gives us an API that understands rows in tables and encapsulates the details of how the bytes are arranged.
What this means is that somewhere we need a transformation from a domain agnostic representation to a domain specific representation, and vice versa.
Normally, these take the form of functions.
toDomainModel: JsonDocument -> DomainModel
toJson: DomainModel -> JsonDocument
These functions are normally defined by, and invoked by, the repository implementation -- they are part of the immense infrastructure that Evans describes.
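A minimal C# sketch of this shape (Customer, CustomerDocument and the in-memory dictionary standing in for the appliance are all my own illustrative names; Json.NET is assumed as the serializer):

using System;
using System.Collections.Generic;
using Newtonsoft.Json;

// Illustrative domain model, kept free of persistence concerns.
public class Customer
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }

    public Customer(Guid id, string name)
    {
        Id = id;
        Name = name;
    }
}

// Domain-agnostic shape of the persisted bytes.
public class CustomerDocument
{
    public Guid Id { get; set; }
    public string Name { get; set; }
}

// The repository implementation defines and invokes both transformations.
public class JsonCustomerRepository
{
    private readonly Dictionary<Guid, string> _store = new Dictionary<Guid, string>(); // stand-in for the appliance

    public void Save(Customer customer)
    {
        // toJson: DomainModel -> JsonDocument
        var doc = new CustomerDocument { Id = customer.Id, Name = customer.Name };
        _store[customer.Id] = JsonConvert.SerializeObject(doc);
    }

    public Customer GetById(Guid id)
    {
        // toDomainModel: JsonDocument -> DomainModel
        var doc = JsonConvert.DeserializeObject<CustomerDocument>(_store[id]);
        return new Customer(doc.Id, doc.Name);
    }
}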
I cannot do this because the invariants of the domain object will not be considered.
There are a couple possibilities here.
One, of course, would be to get a smarter mapper.
A second possibility is to model the unvalidated representation of the model as a distinct thing from a validated representation.
Example: consider a model for Money, that requires an Amount and a CurrencyCode; and the semantics of your model requires that Amount be a positive number and CurrencyCode be an entry in a fixed collection of tokens. There's nothing wrong with having an UnvalidatedMoney type, that has no semantic restrictions, and a function that converts UnvalidatedMoney to Money (enforcing the invariant).
This is analogous to what Scott Wlaschin describes for modeling a verified email address.
Note that this isn't necessarily an undue burden; if you have an invariant on some domain concept like money, you probably already have validation to do when passing new input from the world into your model. So the work is mostly about making that validation element re-usable.
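A minimal sketch of the Money example, with an illustrative fixed set of currency codes (the names and the exception-based validation style are mine, not prescribed by the answer):

using System;
using System.Collections.Generic;

// Unvalidated shape: whatever came off the wire or out of the database.
public class UnvalidatedMoney
{
    public decimal Amount { get; set; }
    public string CurrencyCode { get; set; }
}

// Validated domain concept: can only be constructed through the conversion below.
public class Money
{
    private static readonly HashSet<string> KnownCurrencies =
        new HashSet<string> { "USD", "EUR", "GBP" }; // illustrative fixed collection of tokens

    public decimal Amount { get; }
    public string CurrencyCode { get; }

    private Money(decimal amount, string currencyCode)
    {
        Amount = amount;
        CurrencyCode = currencyCode;
    }

    // The conversion enforces the invariant; invalid input never becomes Money.
    public static Money From(UnvalidatedMoney input)
    {
        if (input.Amount <= 0)
            throw new ArgumentException("Amount must be positive.");
        if (!KnownCurrencies.Contains(input.CurrencyCode))
            throw new ArgumentException("Unknown currency code.");
        return new Money(input.Amount, input.CurrencyCode);
    }
}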
A third possibility is to look into the Builder pattern. Persistent objects are, from one point of view, messages that a model wrote in the past so that it could read them in the future. So it is often the case that looking at common messaging patterns is useful.
In this approach, we would load an instance of the builder (created for us by a factory, implemented by the domain model), which has an API that allows the repository to pass the domain agnostic data to the domain model. Again, the message builder, being provided by the domain model, knows the domain invariant and can validate it.
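A sketch of that arrangement, assuming illustrative Customer/CustomerBuilder names and an in-memory stand-in for the real storage (in a real solution the builder would come from a factory supplied by the domain model):

using System;
using System.Collections.Generic;

// Illustrative aggregate; direct construction is restricted to the domain side.
public class Customer
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    internal Customer(Guid id, string name) { Id = id; Name = name; }
}

// Builder provided by the domain model; it knows the domain invariant.
public class CustomerBuilder
{
    private Guid _id;
    private string _name;

    public CustomerBuilder WithId(Guid id) { _id = id; return this; }
    public CustomerBuilder WithName(string name) { _name = name; return this; }

    public Customer Build()
    {
        // Invariants are enforced here, on the domain side.
        if (_id == Guid.Empty) throw new InvalidOperationException("Id is required.");
        if (string.IsNullOrWhiteSpace(_name)) throw new InvalidOperationException("Name is required.");
        return new Customer(_id, _name);
    }
}

// Repository side: pushes domain-agnostic data into the builder and lets it validate.
public class CustomerRepository
{
    private readonly Dictionary<Guid, string> _rows = new Dictionary<Guid, string>(); // id -> name, stand-in for real storage

    public Customer GetById(Guid id)
    {
        return new CustomerBuilder()
            .WithId(id)
            .WithName(_rows[id])
            .Build();
    }
}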
All the validation for the domain object is done in the constructor. Should I just inject a factory into the repository?
That should be fine. You'll want to keep the coupling as small as you can manage (think interface), and you'll want to think about the fact that the factory is now part of the public API for the model (think about how to change the factory so that it remains backwards compatible).
ORM and Data Mapper
Most .NET ORM frameworks use a data mapper internally, or can be considered a kind of data mapper themselves. Some ORMs in other tech stacks use the alternative approach to mapping: Active Record. The difference between the two is that in Active Record the business class knows how to persist itself, while a Data Mapper keeps it agnostic of any persistence mechanism.
This is a different distinction from the one between having a Data Model + a Domain Model and mapping straight to the Domain Model. Regardless of whether you map directly to the Domain or have an intermediate Data Model, you will always have a Data Mapper; both approaches imply it. The Data Mapper can be an ORM tool or custom code.
It's not clear from your question if you intend to use both an ORM and an additional Data Mapper on top, but I wouldn't recommend trying that or calling your Data Model <=> Domain Model mapping logic a "Data Mapper" in the PoEAA sense of the term.
Object hydration and invariants
Oftentimes, domain object hydration by a Data Mapper goes through a different path than normal use-case-driven code, in order to bypass all the validation that would otherwise happen. This can be done via parameterless constructors with restricted scope (protected, or internal combined with InternalsVisibleTo) that the ORM or custom code has access to. Some newer reflection-based frameworks can also access private fields.
With a Data Model in addition to your Domain Model, this is all simpler, since you don't need to restrict access to a Data Model object the way you would a Domain entity. Data Models don't have invariants, so you can safely leave their parameterless constructor and properties publicly accessible and settable for rehydration. The only thing you need to take care of is to have some FromData() method or constructor on your domain entity that takes a data model as input and instantiates the entity. Again, the technique is made safe thanks to accessibility-level restrictions -- the hydration code can typically live in an assembly granted InternalsVisibleTo by the domain.
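A minimal sketch of that pairing (CustomerData/Customer and the property set are illustrative, not from the original post):

using System;

// Data model: no invariants, fully settable, safe to hand to the ORM for rehydration.
public class CustomerData
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public decimal CreditLimit { get; set; }
}

// Domain entity: construction is controlled; hydration goes through an internal
// factory that only assemblies granted InternalsVisibleTo can call.
public class Customer
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public decimal CreditLimit { get; private set; }

    private Customer() { }

    internal static Customer FromData(CustomerData data)
    {
        return new Customer
        {
            Id = data.Id,
            Name = data.Name,
            CreditLimit = data.CreditLimit
        };
    }
}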
I am looking to implement the Onion Architecture in our ASP.NET MVC application. I understand the need to separate View Models from Domain Entities; however, I am finding myself writing redundant code. There is redundant code because my view models and domain entities look exactly the same, with the exception that my view models have the [Serializable] data annotation. I need these models to be serializable because I am using ASP.NET Session State, whose State Server needs objects to be serializable.
I personally feel the domain entities should NOT be serializable because it would then become dependent on a particular technology. However, how can I avoid redundant code?
I should add that my service methods are dependent on these serializable data models.
I would avoid annotating my domain objects with anything persistence-related or otherwise non-domain-related. This way, my Domain project doesn't depend on another layer and isn't cluttered with things that aren't relevant to the Domain. When we need to bend the rules, I prefer bending them in a way that doesn't involve a dependency on a persistence detail.
The point is to keep the layers focused on their purpose, because it's very easy to mix them up and create (in time) the big ball of mud.
In your case, I have the feeling you don't really have a rich domain or it's improperly modeled. It seems you only have data structures and your needs are CRUDy.
If you are certain the app won't evolve to become more complex, i.e. it will just be data structure manipulation, then you can have one model and use it for all purposes. Basically, you can cut corners and use the 'business' model for everything. No need for abstractions and other stuff.
But if you think the app will evolve, or there are or will be business rules and processes, i.e. you'll need to model behaviour as perceived by the business, then it's best to keep things very decoupled, even if at this stage they seem identical.
To make it easier, you can define a view model (with copy-paste) and use AutoMapper to map the business object to the view model. Another approach is for your query service/repository/object to return that view model directly (map the query result to the view model); see the sketch below.
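A sketch of that second approach, with hypothetical Profile/ProfileViewModel types and the query source passed in as an IQueryable (e.g. a DbSet from your context):

using System.Collections.Generic;
using System.Linq;

// Hypothetical persisted/business object.
public class Profile
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string SecondName { get; set; }
    public int Age { get; set; }
}

// Hypothetical view model shaped for the screen.
public class ProfileViewModel
{
    public string Name { get; set; }
    public string SecondName { get; set; }
    public int Age { get; set; }
}

// The query side projects straight into the view model, so there is no
// separate business-object-to-view-model mapping step.
public class ProfileQueryService
{
    private readonly IQueryable<Profile> _profiles; // e.g. dbContext.Profiles

    public ProfileQueryService(IQueryable<Profile> profiles)
    {
        _profiles = profiles;
    }

    public List<ProfileViewModel> GetProfilesForGrid()
    {
        return _profiles
            .Select(p => new ProfileViewModel
            {
                Name = p.Name,
                SecondName = p.SecondName,
                Age = p.Age
            })
            .ToList();
    }
}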
View models can contain domain entities/models. My domain entities are partial classes and all (eventually) inherit from a base entity which is serializable. Since I use my domain models within some of my view models, I use data annotations on the domain models as well. Your domain model library should not depend on/reference anything else (domain driven).
I wouldn't call [Serializable] a data annotation per se, since it's part of the core .Net platform (mscorlib.dll). Data Annotations refers to specific attributes used to perform data validation and other operations in ASP.Net or Entity Framework.
Should an Entity be [Serializable] ? Probably not, but I don't think this attribute is as "adulterating" to your domain layer as other attributes that come from external, web- or persistence-oriented libraries. It doesn't tie you to a particular third-party framework.
The whole "redundant code" issue depends IMO on the type of system you're building. In a genuine DDD application, duplication between Domain entities and View Models will probably not be all that blatant, and the benefits of a separate presentation model will typically outweigh the costs. A good article on that subject : Is Layering Worth The Mapping ?
I've built a web application with Entity Framework using POCO.
I'm using these POCO classes as my business objects and not just for persisting data, which works fine until...
Now I need to add some logic to these classes to do things like totalling up sales, order lines, etc.
Should I add methods to my POCO classes to enable this functionality, or leave them purely for persisting data and create some kind of 'processor' whereby I pass in the business objects and get the values I require out?
Is there a best practice for this?
What architectural design are you using, or do you want to use?
For example, if these are your domain entities, you should put as much logic as possible in them, as sketched below. If they are merely data containers and you don't have a real architecture in place, your logic would probably go in some business component.
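A minimal sketch of the first option, assuming the POCOs double as domain entities (the Order/OrderLine names are illustrative):

using System.Collections.Generic;
using System.Linq;

// Illustrative domain entities carrying their own behaviour.
public class OrderLine
{
    public decimal UnitPrice { get; set; }
    public int Quantity { get; set; }

    public decimal LineTotal()
    {
        return UnitPrice * Quantity;
    }
}

public class Order
{
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();

    // The totalling logic lives with the data it operates on.
    public decimal Total()
    {
        return Lines.Sum(line => line.LineTotal());
    }
}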
So if you add some more details to your question, we can help you better.
I'm pretty new to IoC, Dependency Injection and Unit Testing. I'm starting a new pet project and I'm trying to do it right.
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a Linq to entities data context (EF4).
I'm reading in "Dependency Injection" from Mark Seeman that doing it, makes an important dependency and will definitely complicate the testing (that's what he's using POCO objects in a Library Project).
I don't understand why. Although the objects are created by a LINQ to Entities context, I can create them simply by calling the constructor, as if they were normal objects. So I assume that it is possible to create fake repositories that deliver these objects to the caller.
I'm also concerned about the automatic generation of POCO classes, which is not very easy.
Can somebody shed some light? Are POCO objects truly necessary for a decoupled and testable project?
EDIT: Thanks to Yuck, I understand that it's better to avoid autogeneration with templates, which brings me to a design question. If I'm coming from a big legacy database whose tables take on a variety of responsibilities (they don't fit well with the concept of a class with a single responsibility), what's the best way to deal with that?
Deleting the database is not an option ;-)
No, they're not necessary; they just make things easier and cleaner.
The POCO library won't have any knowledge that it's being used by Entity Framework. This allows it to be used in other ways -- in place of a view model, for instance. It also allows you to use the same project on both sides of a WCF service, which eliminates the need to create data transfer objects (DTOs).
Those are just two examples from personal experience, but there are surely more. In general, the less a particular object or piece of code knows about who is using it or how it's being used, the more adaptable and generic it will be for other situations.
You also mention automatic generation of POCO classes. I don't recommend doing this. Were you planning to generate the class definitions from your database structure?
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a LINQ to Entities data context (EF4).
The default classes (not the POCOs) EF generates contain proxies for lazy loading and are tied at the hip to Entity Framework. That means any other class that wants to use those classes will have to reference the required EF assemblies.
This is the dependency Mark Seemann is talking about. Since you are now dependent on these non-abstract types, which in turn are dependent on EF, you cannot simply change the implementation of your repository to something different (i.e. just using your own persistence store) without addressing this change in the classes that depend on these types.
If you are truly only interested in the public properties of the EF-generated types, then you can have the partial classes generated by EF implement a base interface. Put all the properties you need in that base interface and pass the dependency in as the base interface -- now you only depend on the base interface and not on EF anymore.
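A sketch of that technique with hypothetical Customer/ICustomer names; the first partial class stands in for what the EF designer file would already declare, so in a real project you would only add the second one plus the interface:

// Stand-in for the generated half (the EF designer file already declares these members).
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Base interface in your own code, exposing only what consumers need.
public interface ICustomer
{
    int Id { get; }
    string Name { get; }
}

// Your half of the partial class: it just attaches the interface, no new members.
public partial class Customer : ICustomer
{
}

// Consumers depend on the interface, not on the EF-generated type.
public class CustomerGreeter
{
    private readonly ICustomer _customer;

    public CustomerGreeter(ICustomer customer)
    {
        _customer = customer;
    }

    public string Greet()
    {
        return "Hello, " + _customer.Name;
    }
}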
Please help me choose the right way to use entities in an n-tier web application.
At the moment I have the following assemblies in it:
The Model (custom entities) describes the fields of the classes that the application uses.
The Validation assembly validates data integrity from the UI using the reflection-attributes method (it checks data in all layers).
The BusinessLogicLayer is a business facade for additional logic and caching that uses abstract data providers from the DataAccessLayer.
The DataAccessLayer overrides the abstract data providers using a LINQ to SQL data context and LINQ queries. And here is the point that makes me feel I've gone wrong...
Right before my DataLayer sends data to the business layer, it maps (converts) the data retrieved from the DB to the Model classes (custom entities) using mappers. It looks like this:
internal static model.City ToModel(this City city)
{
    if (city == null)
    {
        return null;
    }

    return new model.City
    {
        Id = city.Id,
        CountryId = city.CountryId,
        AddedDate = city.AddedDate,
        AddedBy = city.AddedBy,
        Title = city.Title
    };
}
So the mapper maps the data object to the corresponding model. Is that the right and common way to work with entities, or do I have to use the data objects as entities (to save time)? Am I clear enough?
You could use your data entities in your project if they are POCOs. Otherwise I would create separate models as you have done, but do keep them in a separate assembly (not in the DataAccess project).
But I would not expose them through a webservice.
Other suggestions
IMHO people overuse layers. Most applications do not need a lot of layers. My current client had an architecture like yours for all their applications. The problem was that only the data access layer and the presentation layer had logic in them; all the other layers just took data from the lower layer, transformed it, and sent it to the layer above.
The first thing I did was tell them to scrap all the layers and instead use something like this (requires an IoC container):
Core (Contains business rules and dataaccess through an orm)
Specification (Separated interface pattern. Contains service interfaces and models)
User interface (might be a webservice, winforms, webapp)
That works for most applications. If you find that Core grows and becomes too large to handle, you can split it up without affecting any of the user interfaces.
You are already using an ORM, but have you thought about using a validation block (FluentValidation or DataAnnotations) for validation? It makes it easy to validate your models in all layers; a sketch follows.
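A sketch assuming FluentValidation and a City model mirroring the one in the question (the rules themselves are illustrative):

using FluentValidation;

// Mirrors the model.City from the question (illustrative).
public class City
{
    public int Id { get; set; }
    public int CountryId { get; set; }
    public string Title { get; set; }
}

// One validator class, reusable from any layer.
public class CityValidator : AbstractValidator<City>
{
    public CityValidator()
    {
        RuleFor(c => c.Title).NotEmpty().MaximumLength(200);
        RuleFor(c => c.CountryId).GreaterThan(0);
    }
}

// Usage (e.g. in the business facade):
// var result = new CityValidator().Validate(city);
// if (!result.IsValid) { /* surface result.Errors to the caller */ }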
It may be common practice to send out DTOs from a service boundary (WCF service, etc.), but if you are directly using your "entities" in your presentation model, I don't see any benefit in doing that.
As for the code snippet you have provided, why not use AutoMapper? It eliminates the writing of boilerplate mapping code and does it for you if you have a set of conventions in place.
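A minimal sketch of what that could look like for the City mapping above, assuming the LINQ to SQL City entity and the model.City class from the question, and a dbCity instance retrieved from the data context:

using AutoMapper;

// One-time configuration: properties with identical names (Id, CountryId,
// AddedDate, AddedBy, Title) are mapped by convention.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<City, model.City>();   // LINQ to SQL entity -> custom model entity
});
var mapper = config.CreateMapper();

// Replaces the hand-written ToModel extension method:
model.City mapped = mapper.Map<model.City>(dbCity);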
Get rid of the model now; removing it later will require refactoring the whole application. The last project I worked on used this architecture, and maintaining the DTO layer and the mappings to the database model layer was a huge pain in the arse and offered no useful benefits. One of the main annoyances is that LINQ to SQL does not effectively support a disconnected data model. You cannot update a database table by creating a new DB entity with a primary key matching an existing record and then sticking it into the data context. You have to first retrieve the entity from the database, update it, then commit the changes. Managing this results in really nasty update methods that map all the properties from your DTOs to your LINQ to SQL classes. It also breaks the whole deferred execution model of LINQ to SQL. Don't even get me started on the problems it causes with properties on parent classes that are collections of child DTOs (e.g. a customer DTO with an Orders property that contains a collection of order DTOs); managing those mappings is really, really fiddly. I had to do some extensive optimisations because retrieving a few hundred records ended up causing LINQ to SQL to make 200,000 database calls (admittedly there was also some pretty dumbass code as well, but you get the picture).
The only valid reason to use DTOs is if you want to have multiple pluggable data access layers, e.g. LINQ to SQL and NHibernate for supporting different DB servers. That way you can swap out the data access layer later without having to change any other layers. If you don't need to do this, then save yourself a world of pain and just use the LINQ to SQL entities.