Should entity objects be exposed by the repository? - c#

I have a repository which implements the interface IRepository. The repository performs queries against Entity Framework on behalf of the application and directly returns the entity objects produced.
The whole point of implementing IRepository is that it can be swapped for a different repository in the future. However, returning the exact entity objects produced by Entity Framework will break this. Is that acceptable?
Should the repository therefore convert all Entity Framework objects into business objects before exposing them to the application? And should such objects implement an interface or share a common base type?

The repository interface should deal only with business/domain entities; that is, the repository sends and receives only objects known by the app, objects that aren't related to the underlying persistence implementation.
EF or NHibernate entities model the persistence data, NOT the domain. So IRepository should not return an object that is an implementation detail of the ORM, but an object that can be used directly by the app (either a domain entity or a simplified view model, depending on the operation).
In the repository implementation you deal with ORM entities, which are mapped to the corresponding app entities (usually with a mapper such as AutoMapper). Long story short: when designing IRepository, forget all about its implementation. That's why it's better to design the interface before deciding if/what ORM will be used.
Basically, the repository is the gateway between the app's domain context and the persistence context, and the app SHOULD NOT be coupled to the implementation details of the repository.
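As a rough sketch of that separation (all type names here are hypothetical, using EF6-style code; the mapping is done by hand rather than with AutoMapper to keep it visible):

```csharp
using System.Data.Entity;

// Persistence model: the EF entity, kept internal to the repository layer.
public class CustomerRecord
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<CustomerRecord> Customers { get; set; }
}

// Domain model: what the app actually works with. No ORM references here.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer GetById(int id);
    void Add(Customer customer);
}

// EF-specific implementation: maps ORM entities to domain entities at the boundary.
public class EfCustomerRepository : ICustomerRepository
{
    private readonly ShopContext _context;

    public EfCustomerRepository(ShopContext context) { _context = context; }

    public Customer GetById(int id)
    {
        var record = _context.Customers.Find(id);
        return record == null ? null : new Customer { Id = record.Id, Name = record.Name };
    }

    public void Add(Customer customer)
    {
        _context.Customers.Add(new CustomerRecord { Name = customer.Name });
        _context.SaveChanges();
    }
}
```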

You should look at using one of the POCO templates for generating your entities. That way your entities have no special dependencies on Entity Framework and can be passed freely between layers. It saves a whole lot of effort compared to maintaining a completely separate domain model and mapping between the two (unless your domain model would be significantly different from your entity model, in which case a separate model would make more sense).

If you are using POCO entities, you can assume that any provider will do a similar job. Also, remember that you are returning entities whose properties are mapped to the database. So unless entities would need different property names for each provider (I cannot find a logical explanation for having different names), you can return them from the repository to the business layer directly.
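For illustration, a minimal sketch of what such a POCO entity looks like (hypothetical names):

```csharp
using System.Collections.Generic;

// A POCO entity: no EF base class, no EF attributes, nothing that forces
// consumers of this type to reference Entity Framework.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }

    // 'virtual' is optional; it merely allows EF to create lazy-loading proxies.
    public virtual ICollection<OrderLine> Lines { get; set; }
}

public class OrderLine
{
    public int Id { get; set; }
    public string Product { get; set; }
    public decimal Price { get; set; }
}
```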

Related

How to retrieve ID generated by DB after saveChanges using DbContextScope

I plan to create an application having the following layers, and use Entity Framework as my ORM:
Presentation: not relevant for my question.
Business: in this layer I plan to only use DTO objects. I want my business layer to be separated from any database implementation details, and therefore all interaction with entities is done in the DAL layer.
DAL: in this layer I plan to have all Entity Framework code and the entities. I will introduce repositories which can be called by the Business layer. These repositories expect DTO objects as input, and return DTO objects as output. Mapping between DTO and entity is done in this layer.
I have looked at quite a few online tutorials and articles related to Entity Framework, and came across the DbContextScope project, which seems like a very nice solution to control a 'business transaction' and ensure all related changes are committed or rolled back. See GitHub: https://github.com/mehdime/DbContextScope
The demo in that GitHub repository contains a scenario where a new entity is created in the database. When I try to map that scenario to my layers, it seems to go like this:
Business: create a DTO with property values for the entity to be stored. Create a new DbContextScope and call the repository in the DAL layer, passing the DTO.
DAL: the repository maps the DTO to an entity and adds it to the DbContext of Entity Framework.
Business: call the SaveChanges() method on the DbContextScope, which in turn calls SaveChanges() on the DbContext of Entity Framework.
In the demo, the ID of the entity being stored is already known when the DTO is created. However, I am looking for a way to determine the ID automatically assigned by EF once the SaveChanges() method on the DbContextScope is called in the business layer. Since I am in the business layer at that point, I no longer have access to the entity, and hence I cannot reach its ID property.
I guess I could determine the ID by querying the database for the record just created, but that is only possible if the original DTO contains some unique identifier I can use to query the database. But what if my DTO has no unique value to query by?
Any suggestions on how to solve this, or do you recommend an alternative approach to my layers? (e.g. using entities in the business layer as well, even though that sounds like the wrong thing to do)
I use Mehdime's context scope wherever I can, as I've found it to be an exceptional implementation of a unit of work. I agree with Camilo's comment about the unnecessary separation: if EF is trusted to serve as your DAL, then it should be trusted to work as designed so that you can leverage it completely.
In my case, my controllers manage the DbContextScope, and I use a repository pattern in combination with a DDD design for my entities. The repository serves as the gatekeeper for interactions with the context, which is scoped and located with the DbContextLocator. When it comes to creating entities, the repository serves as the factory, with a "Create{X}" method where {X} represents the entity. This ensures that all information required to create the entity is provided, and that the entity is associated with the DbContext before being returned, so the entity is guaranteed to always be in a valid state. This means that once the context scope's SaveChanges call is made, the bounding service automatically has the entity with its assigned ID. ViewModels / DTOs are what the controller returns to the consumer.
You also have the option to call the DbContext's SaveChanges within the boundary of the DbContextScope, which will reveal IDs prior to the context scope's SaveChanges; this is a very edge-case scenario for when you want to fetch an ID for loosely coupled entities (no FK/mapped relationship). The repository also provides "Delete" methods to ensure all related entities, rules, and such are managed, while editing entities falls under DDD methods on the entity itself.
There may be a more purist argument that this "leaks" details of the domain or EF-specific concerns into the controller, but my personal opinion is that the benefits of "trusting" entities and EF within the scope of the bounded context in the service layer far, far outweigh anything else. It's simpler, and it gives you a lot of flexibility in your code without near-duplicate methods propagating to supply consumers with filtered data, or complex filtering logic to "hide" EF from the service layer. The basic rule I follow is that entities are never returned outside the boundary of their context scope. (No detach/reattach; just Select into ViewModels, and manage Create/Update/Delete on entities based on passed-in view models/parameters.)
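A condensed sketch of that flow (the Order, AppDbContext, and repository types below are illustrative; IDbContextScopeFactory and IAmbientDbContextLocator are the library's own abstractions):

```csharp
using System.Data.Entity;
using Mehdime.Entity; // the DbContextScope library

public class Order
{
    public int Id { get; set; }               // identity column, assigned by the DB
    public string CustomerName { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public interface IOrderRepository
{
    Order CreateOrder(string customerName);
}

// The repository locates the ambient DbContext created by the caller's scope.
public class OrderRepository : IOrderRepository
{
    private readonly IAmbientDbContextLocator _locator;

    public OrderRepository(IAmbientDbContextLocator locator) { _locator = locator; }

    public Order CreateOrder(string customerName)
    {
        var context = _locator.Get<AppDbContext>();
        var order = new Order { CustomerName = customerName };
        context.Orders.Add(order); // tracked immediately: always in a valid state
        return order;
    }
}

public class OrderService
{
    private readonly IDbContextScopeFactory _scopeFactory;
    private readonly IOrderRepository _orders;

    public OrderService(IDbContextScopeFactory scopeFactory, IOrderRepository orders)
    {
        _scopeFactory = scopeFactory;
        _orders = orders;
    }

    public int CreateOrder(string customerName)
    {
        using (var scope = _scopeFactory.Create())
        {
            var order = _orders.CreateOrder(customerName);
            scope.SaveChanges();  // EF assigns the identity value here...
            return order.Id;      // ...and it is immediately readable on the entity.
        }
    }
}
```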
If there are more specific concerns / examples you can provide, please feel free to add some code outlining where you see those issues.

Data Access Layer and DDD

I am trying to learn the idea of Domain-Driven Design and trying to figure out where we should put the database persistence code.
Going through the book "Implementing Domain Driven Design" by Vaughn Vernon, I understand that the Repository or the database operations (along with connections etc.) have to be kept in the Model project (Domain Model Context) itself. Is my understanding correct?
I am referring to his samples of IdentityAccessContext, AgilePMContext etc., where there are interfaces to the repository. I had always grown up with the idea of a separate Data layer, and adding one here leads me to a circular dependency: the interfaces and entities are declared in the Model, which the Data layer requires, while the Data layer needs to be referenced from the Model for persistence.
Is this correct, or am I missing something?
Going through the book "Implementing Domain Driven Design" by Vaughn Vernon, I understand that the Repository or the database operations (along with connections etc.) have to be kept in the Model project (Domain Model Context) itself. Is my understanding correct?
Repository abstractions (interfaces) should be in the Domain layer, but their concrete implementations are in an Infrastructure layer.
See his code samples for the book at https://github.com/VaughnVernon/IDDD_Samples/tree/master/iddd_agilepm/src/main/java/com/saasovation/agilepm, for instance the TeamRepository interface in domain/model and LevelDBTeamMemberRepository in port/adapter/persistence.
There's no circular reference, because persistence is strongly coupled to the Domain, while the Domain is only loosely coupled to persistence if it ever needs it (thanks to Inversion of Control, most of the time achieved via Dependency Injection).
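In C# terms, that layering might look like the following sketch (the repository names echo Vernon's samples; the EF-based implementation is purely illustrative):

```csharp
using System;
using System.Data.Entity;

// Domain layer: only the abstraction lives here; no persistence references.
namespace MyApp.Domain.Model
{
    public class Team
    {
        public Guid Id { get; set; }
        public string Name { get; set; }
    }

    public interface ITeamRepository
    {
        Team TeamOfId(Guid id);
        void Save(Team team);
    }
}

// Infrastructure layer: references the Domain assembly, never the reverse.
namespace MyApp.Infrastructure.Persistence
{
    using MyApp.Domain.Model;

    public class EfTeamRepository : ITeamRepository
    {
        private readonly DbContext _context;

        public EfTeamRepository(DbContext context) { _context = context; }

        public Team TeamOfId(Guid id)
        {
            return _context.Set<Team>().Find(id);
        }

        public void Save(Team team)
        {
            _context.Set<Team>().Add(team);
            _context.SaveChanges();
        }
    }
}
```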
I wouldn't say there's a data access layer per se.
Actually, repositories translate domain objects into data and vice versa using a data mapper.
Models, services or transaction scripts must use repositories to both get and persist objects. There's no circular reference here:
Service layer injects repository/repositories to implement domain operations. That is, a service knows what to do when it requires objects, but it is not focused on how they're translated into data (see the code sketch below).
Repositories use a data mapper to persist and get data (1).
The data mapper is usually an OR/M like Entity Framework, NHibernate, Dapper...
Furthermore, a good DDD implementation will enforce inversion of control, and this will mean that:
Upper layers know about lower layers.
Lower layers don't own any reference to upper layers.
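As a minimal sketch of that inversion of control (hypothetical names), the service receives the repository abstraction through its constructor and never references a concrete persistence type:

```csharp
public class Order
{
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}

public interface IOrderRepository
{
    void Add(Order order);
}

public class PlaceOrderService
{
    private readonly IOrderRepository _orders;

    // The repository is injected; the service never instantiates a concrete one.
    public PlaceOrderService(IOrderRepository orders) { _orders = orders; }

    public void Place(int customerId, decimal total)
    {
        var order = new Order { CustomerId = customerId, Total = total };
        _orders.Add(order); // how this is persisted is not the service's concern
    }
}
```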
In summary, DDD doesn't have a DAL as you think of it; rather, it abstracts and encapsulates each concern in order to let upper layers be decoupled from the underlying data approach.
About the circular reference thing...
OP said:
The interfaces and entities are declared in the Model, which the Data layer requires, while the Data layer needs to be referenced from the Model for persistence.
Double-check your statement. If models represent a higher layer of abstraction than data, why should data reference models?
It is the model that requires the data. And the data should be accessed through interfaces, to let the models be agnostic about the data access strategy. This is called inversion of control, as I said previously in this answer.
Maybe you think the data layer requires a reference to entities because you're still thinking the old way. But if you practice a good separation of concerns and work with a data mapper (or implement one yourself), the data mapper just maps objects to raw data and vice versa, and it doesn't require a reference to concrete classes like domain objects. (You should ask yourself how Entity Framework can persist your entities to your favourite database engine without even knowing about those entities (2).)
Anyway, as I also said earlier in this answer, you shouldn't think about a DAL but about a bunch of repositories and a data mapper, or just repositories (see exception (1)).
Usually, dependent lower layers are instantiated and given to upper layers using the dependency injection pattern (a possible implementation of inversion of control).
(1) Exception to this rule: if we're not talking about relational storage, maybe there's no data mapper at all, and the repository implements its interface by accessing the data at a lower level of abstraction/encapsulation. For example, if the repository implements its operations by storing data in XML, then it would use an XDocument or XmlDocument internally and there's no actual data mapper at all.
(2) Actually, OR/M frameworks like Entity Framework learn about your model through configuration. While EF doesn't require a reference to your concrete domain in order to compile, it requires you to provide class maps using Code First or other approaches.
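To illustrate footnote (2), a minimal EF6 Code First sketch (the Product/ShopContext names are hypothetical) where the mapping lives entirely outside the domain class:

```csharp
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// The domain class itself: no EF reference, no attributes.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// All mapping knowledge lives in configuration, outside the domain class.
public class ProductMap : EntityTypeConfiguration<Product>
{
    public ProductMap()
    {
        ToTable("Products");
        HasKey(p => p.Id);
        Property(p => p.Name).HasMaxLength(200).IsRequired();
    }
}

public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Configurations.Add(new ProductMap());
    }
}
```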

DDD - Restricting repository to create only certain entity

I have one important question about repositories and entities. Should I restrict a repository to creating a specific entity/aggregate root (via generic repositories like a BaseRepository)?
At the moment, the base repository has access to a database factory object (not DbFactory, but custom) to retrieve any POCO (not just those related to the aggregate root). So, technically, I can create any entity from any repository. Obviously, as a programmer I don't do that, but it's definitely possible. So, is it necessary to restrict the repository and allow it to create only a specific entity? Note that some entities have sub-entities as well. So, if I restrict the repository to creating one entity (via the BaseRepository), how do I create the sub-entities?
As @Jonas suggests in his answer, I'd create one repository per aggregate root. These should hide all persistence detail. This means taking domain entities as parameters and returning domain entities, usually mapping from the ORM entity to the domain entity within the repository. As a side effect, this also makes you think about what data you need, reducing some of the horrors you can encounter in DDD when dealing with entities that have lazy-loaded properties.
I'd avoid the generic repository pattern. Like you say in your original post, in DDD you want your code to document your design intention; you don't want to provide code that allows clients/callers to load any entity from your database. Also, most of your entities will most likely be built from many tables/resources, which doesn't map well onto the generic repository pattern.
I would consider it clearer to have a Repository for each aggregate root in your Bounded Context.
It makes it obvious what is an aggregate root in your application, as opposed to what are (sub-)entities. This way you are protecting yourself and others from breaking how aggregates are meant to be accessed and used through repositories.
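A minimal sketch of that idea (hypothetical names): the repository exposes only the aggregate root, and sub-entities are created and reached exclusively through it:

```csharp
using System.Collections.Generic;

// Order is the aggregate root; OrderLine is a sub-entity that is never
// loaded or saved on its own, only through its root.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public int Id { get; private set; }
    public IReadOnlyCollection<OrderLine> Lines { get { return _lines; } }

    public void AddLine(string product, decimal price)
    {
        _lines.Add(new OrderLine(product, price));
    }
}

public class OrderLine
{
    public OrderLine(string product, decimal price)
    {
        Product = product;
        Price = price;
    }

    public string Product { get; private set; }
    public decimal Price { get; private set; }
}

// One repository per aggregate root: no generic escape hatch that could
// fetch arbitrary POCOs; sub-entities travel with their root.
public interface IOrderRepository
{
    Order GetById(int id);  // returns the root together with its lines
    void Add(Order order);  // persists the whole aggregate
}
```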

Why EF entities create a dependency, and the need for POCOs

I'm pretty new to IoC, Dependency Injection and Unit Testing. I'm starting a new pet project and I'm trying to do it right.
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a Linq to entities data context (EF4).
I'm reading in "Dependency Injection" by Mark Seemann that doing this creates an important dependency and will definitely complicate testing (that's why he uses POCO objects in a library project).
I don't understand why. Although the objects are created by a LINQ to Entities context, I can create them simply by calling the constructor, as if they were normal objects. So I assume it is possible to create fake repositories that deliver these objects to the caller.
I'm also concerned about the automatic generation of POCO classes, which is not very easy.
Can somebody shed some light? Are POCO objects truly necessary for a decoupled and testable project?
EDIT: Thanks to Yuck, I understand that it's better to avoid auto-generation with templates, which brings me to a design question. If I come from a big legacy database whose tables assume a variety of responsibilities (they don't fit well with the concept of a class with a single responsibility), what's the best way to deal with that?
Deleting the database is not an option ;-)
No, they're not necessary; they just make things easier and cleaner.
The POCO library won't have any knowledge that it's being used by Entity Framework. This allows it to be used in other ways - in place of a view model, for instance. It also allows you to use the same project on both sides of a WCF service which eliminates the need to create data transfer objects (DTO).
Those are just two examples from personal experience, but there are surely more. In general, the less a particular object or piece of code knows about who is using it or how it's being used, the more adaptable and generic it will be for other situations.
You also mention automatic generation of POCO classes. I don't recommend doing this. Were you planning to generate the class definitions from your database structure?
I was planning to use the Repository pattern to mediate with the data. The objects that I was going to return from the repositories were going to be objects collected from a Linq to entities data context (EF4).
The default classes (not the POCOs) EF generates contain proxies for lazy loading and are tied at the hip to Entity Framework. That means any other class that wants to use those classes will have to reference the required EF assemblies.
This is the dependency Mark Seemann is talking about. Since you are now dependent on these non-abstract types, which in turn are dependent on EF, you cannot simply change the implementation of your repository to something different (i.e. using your own persistence store) without addressing this change in the classes that depend on these types.
If you are truly only interested in the public properties of the EF-generated types, then you can have the partial classes generated by EF implement a base interface. Put all the properties you need in that base interface and pass the dependency around as the base interface; now you only depend on the base interface, and not on EF anymore.
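A minimal sketch of that technique (the Customer type is hypothetical, standing in for an EF-generated entity):

```csharp
// Hand-written interface exposing only the properties callers need.
public interface ICustomer
{
    int Id { get; }
    string Name { get; set; }
}

// Stand-in for the EF-generated half of the partial class (normally this
// lives in the generated code file and has matching property names).
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The hand-written partial half only declares the interface.
public partial class Customer : ICustomer
{
}

// Consumers depend on ICustomer and never see the EF-generated type.
public interface ICustomerRepository
{
    ICustomer GetById(int id);
}
```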

Repository; Mapping between POCOs / Linq-to-Sql entity classes

I'm making my first database program, with SQL Server Express. Currently I'm using Linq-to-Sql for data access, and my repository classes return "entity" type objects; meaning, I extend the dbml entity classes to use as my business object classes. Now I want to make this more separated, and have POCO business objects.
This is where I wonder what different solutions may exist. It looks to me like I need to manually map, property by property, each entity class into a domain class in the repositories. So far I have about 20 tables with a few hundred columns in total. I just want to verify: is this a common/typical approach that you still use? And if there are alternatives that don't introduce excessive complexity, what would they be?
Before creating your mappings manually, have a look at AutoMapper
AutoMapper is an object-object mapper. Object-object mapping works by transforming an input object of one type into an output object of a different type. What makes AutoMapper interesting is that it provides some interesting conventions to take the dirty work out of figuring out how to map type A to type B. As long as type B follows AutoMapper's established convention, almost zero configuration is needed to map two types.
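For example, a minimal sketch using AutoMapper's configuration API (CustomerEntity/CustomerDto are hypothetical; since the property names line up, no per-property wiring is needed; the exact API varies slightly between AutoMapper versions):

```csharp
using AutoMapper;

// Hypothetical ORM entity and POCO business object with matching names.
public class CustomerEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class MappingDemo
{
    public static CustomerDto ToDto(CustomerEntity entity)
    {
        // In real code the configuration is created once at startup;
        // AutoMapper matches Id and Name purely by convention.
        var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerEntity, CustomerDto>());
        var mapper = config.CreateMapper();
        return mapper.Map<CustomerDto>(entity);
    }
}
```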
AutoMapper is a good tool for class-to-class conversions. However, if I'm thinking of a DAL that combines Linq2Sql and AutoMapper, I'd ask: why not just go with Fluent NHibernate? It's very easy to set up, works on just about any database including SqlExpress, and there is a Linq provider that integrates pretty seamlessly. All of this is free, open-source code, and it's very commonly used, so there's ample documentation and support.
If you want to stay with Linq2Sql but have a more full-featured domain model, you could consider deriving your domain model from the DTOs. That would allow you to have business logic in the domain, with the properties passed up to the DTO. Understand, however, that the Linq2Sql objects will not be directly castable to domain objects; you'll need a constructor in the domain class that takes a DTO and copies its data into the domain object (requiring at least a one-way mapping from DTO to domain). The reverse conversion isn't necessary, though, because a derived class can always be treated as its parent: just hand the domain class to the repository where it expects the DTO.
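A minimal sketch of that derivation approach (all names hypothetical):

```csharp
// Stand-in for a Linq-to-Sql generated DTO (simplified).
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Domain class derives from the DTO, adding behavior on top.
public class Customer : CustomerDto
{
    public Customer() { }

    // One-way mapping: copy the DTO's values into the domain object.
    public Customer(CustomerDto dto)
    {
        Id = dto.Id;
        Name = dto.Name;
    }

    public bool HasName
    {
        get { return !string.IsNullOrEmpty(Name); }
    }
}

// Because Customer *is a* CustomerDto, the repository can accept it wherever
// it expects the DTO; no reverse mapping is required.
public interface ICustomerRepository
{
    void Save(CustomerDto customer); // a Customer instance can be passed directly
}
```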
