How to retrieve the ID generated by the DB after SaveChanges using DbContextScope - C#

I plan to create an application having the following layers, and use Entity Framework as my ORM:
Presentation: not relevant for my question.
Business: in this layer I plan to only use DTO objects. I want to keep my business layer separated from any database implementation details, and therefore all interaction with entities is done in the DAL layer.
DAL: in this layer I plan to have all Entity Framework code and the entities. I will introduce repositories which can be called by the Business layer. These repositories expect DTO objects as input, and return DTO objects as output. Mapping between DTO and entity is done in this layer.
I have looked at quite a few online tutorials and articles related to Entity Framework, and came across the DbContextScope project, which seems like a very nice solution for controlling a 'business transaction' and ensuring all related changes are committed or rolled back. See GitHub: https://github.com/mehdime/DbContextScope
The demo in that GitHub repository contains a scenario where a new entity is created in the database. When I try to map that scenario to my layers, it seems to go like this:
Business: create DTO with property values for the entity to be stored. Create new DbContextScope and call repository in the DAL layer passing the DTO.
DAL: the repository maps the DTO to an entity and adds it to the DbContext of Entity Framework.
Business: call the SaveChanges() method on DbContextScope, which in turn calls SaveChanges() on the DbContext of Entity Framework.
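In code, the flow above looks roughly like this (a minimal sketch with hypothetical ProductDto, ProductRepository and AppDbContext names; the scope factory and ambient locator are the types from the DbContextScope library):

// Business layer
public void CreateProduct(ProductDto dto)
{
    using (var scope = _dbContextScopeFactory.Create())
    {
        _productRepository.Add(dto);   // DAL maps the DTO to an entity and adds it to the DbContext
        scope.SaveChanges();           // EF assigns the ID here, but only the entity tracked in the DAL knows it
    }
}

// DAL layer
public void Add(ProductDto dto)
{
    var context = _ambientDbContextLocator.Get<AppDbContext>();
    var entity = MapToEntity(dto);     // hypothetical DTO -> EF entity mapping helper
    context.Products.Add(entity);      // entity.Id will be populated by the database on SaveChanges
}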
In the demo, the ID of the entity being stored is already known when the DTO is created. However, I am looking for a way to determine the ID that EF assigns automatically once the SaveChanges() method on the DbContextScope is called in the business layer. Since I am in the business layer at that point, I no longer have access to the entity, so I cannot read its ID property.
I guess I could determine the ID by querying the database for the record just created, but that only works if the original DTO contains some unique identifier I can use in the query. What if my DTO has no unique value I can query on?
Any suggestions on how to solve this, or do you recommend an alternative approach to my layers? (e.g. use entities in the business layer as well, even though that sounds like the wrong thing to do)

I use Mehdime's context scope wherever I can, as I've found it to be an exceptional implementation of a unit of work. I agree with Camilo's comment about the unnecessary separation. If EF is trusted to serve as your DAL then it should be trusted to work as designed so that you can leverage it completely.
In my case, my controllers manage the DbContextScope, and I use a repository pattern in combination with a DDD design for my entities. The repository serves as the gatekeeper for interactions with the context scope, which it locates through the DbContextLocator. When it comes to creating entities, the repository serves as the factory, with a "Create{X}" method where {X} represents the entity. This ensures that all information required to create the entity is provided, and that the entity is associated with the DbContext before being returned, so the entity is guaranteed to always be in a valid state. It also means that once the context scope's SaveChanges call is made, the calling service automatically has the entity with its assigned ID. ViewModels / DTOs are what the controller returns to the consumer.
You also have the option to call the DbContext's SaveChanges within the boundary of the DbContextScope, which will reveal IDs prior to the context scope's SaveChanges. This is very much an edge-case scenario, for when you want to fetch an ID for loosely coupled entities (no FK/mapped relationship). The repository also serves the "Delete" code, to ensure all related entities, rules, and such are managed, while editing entities falls under DDD methods on the entity itself.
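A rough sketch of that shape, with hypothetical Order and OrderRepository names (the scope factory and DbContextLocator are the types from Mehdime's library):

// Repository (DAL): factory method, entity stays attached to the ambient context
public Order CreateOrder(Customer customer, DateTime orderDate)
{
    var context = _contextLocator.Get<AppDbContext>();
    var order = new Order(customer, orderDate);  // all required state supplied up front
    context.Orders.Add(order);
    return order;                                // tracked and always in a valid state
}

// Controller / service boundary
public int PlaceOrder(Customer customer)
{
    using (var scope = _contextScopeFactory.Create())
    {
        var order = _orderRepository.CreateOrder(customer, DateTime.UtcNow);
        scope.SaveChanges();
        return order.Id;                         // ID is populated by EF once SaveChanges runs
    }
}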
There may be a more purist argument that this "leaks" details of the domain or EF-specific concerns into the controller, but my personal opinion is that the benefit of "trusting" entities and EF within the scope of the bounded context in the service layer far, far outweighs anything else. It's simpler, and it gives you a lot of flexibility in your code without near-duplicate methods propagating just to supply consumers with filtered data, or complex filtering logic to "hide" EF from the service layer. The basic rule I follow is that entities are never returned outside the boundary of their context scope. (No detach/reattach; just Select into ViewModels, and manage Create/Update/Delete on entities based on passed-in view models/parameters.)
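For reads, that rule amounts to projecting inside the scope instead of returning entities, roughly like this (hypothetical names):

public IList<OrderSummaryViewModel> GetOrderSummaries(int customerId)
{
    using (_contextScopeFactory.CreateReadOnly())
    {
        var context = _contextLocator.Get<AppDbContext>();
        return context.Orders
            .Where(o => o.CustomerId == customerId)
            .Select(o => new OrderSummaryViewModel
            {
                Id = o.Id,
                Total = o.Lines.Sum(l => l.Price * l.Quantity)
            })
            .ToList();                           // entities never leave the context scope
    }
}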
If there are more specific concerns / examples you can provide, please feel free to add some code outlining where you see those issues.

Related

DDD and references between aggregates in EFCore and C#

I have an issue that I am not sure how to solve when applying DDD with C#/EF Core.
Simplified situation: we have two aggregates, Item and Warehouse. Each has an ExternalId (a Guid) that identifies it to the outside world (front end, etc.) and is also treated as its domain identity. Each also has a database Id that identifies it inside the database model. The entity model and the DB model are the same class, since EF Core allows the use of private fields; only the ExternalId and the required fields are exposed. The entity (in both the DDD and the EF Core sense) contains quite a lot of business logic and methods strictly coupled to the object. In general I follow the pattern from the eShop/eShopOnContainers example.
An Item is assigned to a Warehouse, and when creating an Item we need to pass the Warehouse to its constructor.
Is it proper to pass the full Warehouse object to Item's constructor (and to other methods that Item defines):
public Item(Warehouse warehouse,..)
or should I rely on the database Id only:
public Item(long warehouseId,..)
I am torn about this: on the one hand I read that aggregates should not reference other aggregates, but on the other hand using the database Id leaks an implementation detail (object persistence in a relational DB) into the domain model, which in my opinion should not happen.
Using ExternalId:
public Item(Guid warehouseId,..)
does not solve the problem, as the actual relations in the DB are not based on it.
What is your opinion? I am a bit puzzled.
Usually you would create a Value Object for the Id of the Aggregate Root. Relying on an Id generated by the database is one possibility; if you decide to let the DB generate the Id, then you will need to work with that.
But why would you need to pass the Warehouse reference or Id anyway? It looks like Item is an Entity and Warehouse is the Aggregate Root that should contain that Entity. In general you should not create an Entity outside of its Aggregate Root.
Edit: there are several identity creation strategies, as Vaughn Vernon describes in the red book. One of them is to let the persistence mechanism, such as a SQL database, generate the unique identifier of an entity or aggregate.
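A small sketch of both ideas, using hypothetical types and a C# record struct for the identity value object (the underlying value can still be generated by the database):

using System.Collections.Generic;

public readonly record struct WarehouseId(long Value);

public class Warehouse
{
    private readonly List<Item> _items = new List<Item>();
    public WarehouseId Id { get; private set; }

    // Items are created through the aggregate root rather than on their own
    public Item AddItem(string name)
    {
        var item = new Item(Id, name);
        _items.Add(item);
        return item;
    }
}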
Your domain model created during analysis is often different from the one created during design. Semantically they are the same (you are passing references), but the design model recognises that you have to persist the data, so you might not want to pre-load all referenced objects for performance reasons, whether that means loading them from disk within the same domain or from a remote service in another domain.

DDD principles and repositories with Dapper

I'm setting up a solution in which I apply onion architecture and DDD patterns.
One of the DDD principles encourages domain entities to have only private setters and a private default constructor, to make sure you cannot create domain entities in an invalid state.
Repositories contain the data operations on domain entities, which are mapped from/to the database. I have been trying the following two approaches:
Domain entities in a purist way: no default constructor and no public setters; validation is done in the constructor(s), which ensures you cannot create a domain entity in an invalid state. The side effect is that it is harder to materialize them in the repositories during read operations, because you need reflection to create instances and map properties, and you need dynamics in the Dapper requests, which then have to be mapped to the actual domain entities. If I map the results directly to the domain entities without using dynamics, Dapper throws an exception that there is no public constructor.
Domain entities in a non-purist way: you allow a default constructor and public setters, so you can create entities that are invalid at a given point in time. In that case you need to call a Validate() method manually to make sure they are valid before you continue. This makes materializing in the repositories much easier, as you need neither reflection nor dynamics to map the database to the model.
Both methods work; however, with option 2 the repositories become a lot simpler because they contain much less custom mapping code, and without reflection they will obviously be more performant as well. Of course, DDD is then not applied in a purist way.
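One possible middle ground for option 1 is to let Dapper materialize a plain row type and keep constructor-based validation in the domain entity; a minimal sketch with hypothetical names and an assumed IDbConnection field:

// Inside the repository: a plain row type that Dapper can materialize; it carries no invariants
private class UserRow
{
    public long Id { get; set; }
    public string DisplayName { get; set; }
}

public User GetById(long id)
{
    var row = _connection.QuerySingle<UserRow>(
        "SELECT Id, DisplayName FROM Users WHERE Id = @id", new { id });

    // The domain entity still enforces its invariants in its constructor
    return new User(row.Id, row.DisplayName);
}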
Before I decide what I will use in my project, a question: are there any other micro-ORM frameworks out there that can handle private constructors and setters, so that mapping the database to this kind of 'pure' domain entity is supported without additional custom mapping logic? (No EF or NHibernate; I want something lightweight.)
Or are there other technical solutions that keep the 'pure' domain entity approach while still allowing easy repository mapping?
EDIT: the solution I implemented was the following.
First, the constructors and setters in the domain entities are all 'internal', which means they cannot be called by consumers of the domain model. However, I use 'InternalsVisibleTo' to give the data access layer direct access to them, which makes materializing from the database very easy with Dapper (no need for intermediate models). From the application layer, I can only use domain methods to change a domain entity, not the properties directly.
Second, to construct new domain entities from my application layer, I added fluent builders, so I can now construct them like this:
User user = new UserBuilder()
    .WithSubjectId("045454857451245")
    .WithDisplayName("Bobby Vinton")
    .WithLinkedAccount("Facebook", la => la.WithProviderSubjectId("1548787788877").WithEmailAddress("bobby1@gmail.com"))
    .WithLinkedAccount("Microsoft", la => la.WithProviderSubjectId("54546545646").WithEmailAddress("bobby2@gmail.com"))
    .Build();
When the builder 'builds' the entity, validation is done as well, so you can never create an entity in an invalid state.
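For context, the entity side of this approach could look roughly like the following (hypothetical assembly and type names):

using System;
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.DataAccess")]

public class User
{
    internal User() { }                              // not callable from the application layer
    public string SubjectId { get; internal set; }
    public string DisplayName { get; internal set; }

    // The application layer changes state only through domain methods
    public void Rename(string displayName)
    {
        if (string.IsNullOrWhiteSpace(displayName))
            throw new ArgumentException("Display name is required.", nameof(displayName));
        DisplayName = displayName;
    }
}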
One of the DDD principles encourages domain entities to have only private setters and a private default constructor, to make sure you cannot create domain entities in an invalid state.
That's not quite right. Yes, rich domain models don't usually expose setters, but that is because they don't need setters. You tell the model what to do at a higher level of abstraction, and allow it to determine how its own data structures should be modified.
Similarly, there are often cases where it makes sense to expose the default constructor: if you think of an aggregate as a finite state machine, then the default constructor is a way to initialize the aggregate in its "start" state.
So normally you reconstitute an aggregate in one of two ways: either you initialize it in its default state, and then send it a bunch of messages, or you use the Factory pattern, as described in the blue book.
this means an extra mapping in between, which makes code more complex
Perhaps, but it also ensures that your domain code is less dependent on ORM magic. In particular, it means that your domain logic can be operating on a different data structure than what is used to make persistence "easy".
But it isn't free -- you have to describe in code how to get values out of the aggregate root and back into the database (or into the ORM entity, acting as a proxy for the database).
The key is that you don't use Dapper to work with your domain entities; instead you use it inside your repository layer with POCO entities. Your repository methods then return domain entities by converting the POCO entities (which Dapper uses for data access) into domain entities.

DDD - Restricting repository to create only certain entity

I have one important question about repositories and entities. Should I restrict a repository to creating a specific entity/aggregate root (via generic repositories like BaseRepository)?
At the moment, the base repository has access to a database factory object (not DbFactory, but a custom one) to retrieve any POCO (not just the ones related to the aggregate root). So, technically, I can create any entity from any repository. Obviously, as a programmer I don't do that, but it's definitely possible. So, is it necessary to restrict the repository and allow it to create only a specific entity? Note that some entities have sub-entities as well, so if I restrict the repository to creating one entity (via BaseRepository), how do I create the sub-entities?
As @Jonas suggests in his answer, I'd create one repository per aggregate root. These should hide all persistence detail, which means taking domain entities as parameters and returning domain entities, usually mapping from ORM entity to domain entity within the repository. As a side effect, this also makes you think about what data you need, reducing some of the horrors you can encounter in DDD when dealing with entities that have lazy-loaded properties.
I'd avoid the generic repository pattern. Like you say in your original post, in DDD you want your code to document your design intention; you don't want to provide code that allows clients/callers to load any entity from your database. Also, most of your entities will likely be built from many tables/resources, which doesn't map well onto the generic repository pattern.
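As an illustration, such a repository interface stays in domain terms and documents intent, rather than exposing a generic load-anything surface (hypothetical names):

using System.Collections.Generic;

public interface IOrderRepository
{
    Order GetById(int orderId);                          // returns a fully loaded aggregate
    IReadOnlyList<Order> GetPendingForCustomer(int customerId);
    void Add(Order order);
    void Remove(Order order);
}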
I would consider it cleanest to have a Repository for each aggregate root in your Bounded Context.
It makes it obvious what the aggregate roots in your application are, as opposed to the (sub) entities. This way you are protecting yourself and others from breaking the way aggregates are accessed and used through repositories.

If Entity Framework / DbContext is the DAL / Repository, where does it fit within 3-tier architecture?

I've been reading articles on StackOverflow and other sites all day about best architecture practices and there are just so many conflicting ideas and opinions.
I've finally settled on an approach, but I am having a really hard time deciding where to place the EF objects (DbContext, Fluent APIs, Seeding data, etc). Here is what I currently have:
ASP.NET MVC Project: The actual web project. Contains the standard views, controllers and View Models (inside a Models folder).
Domain Model Project: Contains all POCO classes that define the database (domain) objects. Currently, does not mention or reference any EF objects.
Service Layer Project: Contains service objects for each type of domain object (e.g., IProductService, IOrderService, etc). Each service references EF objects like DbSets and handles business rules - e.g., add a Product, fetch a Product, append a Product to an Order, etc.
So the question is: in this configuration, where do the EF classes go? Initially I thought the Service Layer, but that doesn't seem to make sense. I then thought about putting them in the Domain Model Layer, but that ties the domain models to EF, which is essentially a DAL / repository. Finally, I thought about creating a separate DAL project just for EF, but that seems like a huge waste considering it will likely contain only 3-4 files (the DbContext and a few other small files).
Can anyone provide any guidance?
There is no need for a separate Domain Model project, since it would be redundant. The EF classes can act directly as the domain model, and they are converted to view models when sent to the view. EF can be separated into its own class library. Most people use the repository pattern along with an ORM in case it ever needs to be replaced, but there is also criticism of the repository pattern; check this out.
Here is what I do:
Data:
Has one class inheriting from DbContext (see the sketch after this answer).
It has all the db sets.
Overrides OnModelCreating.
Mapping primary keys and relationships.
Entities:
Has every POCO classes.
Each property is decorated with needed data annotations.
Services:
Each service has common methods (GetList(), Find(), Create(), etc.).
Business:
Called from clients; orchestrates the services to perform a specific task, e.g. UserChangePassword (this will check whether the task can be performed, then perform it, or return error/unauthorized statuses, among many others, so the client can show the correct information regarding the task). In my case, this is also where I log.
Clients (Desktop/Web/WPF/etc.).
I'm not saying this is the best approach, I'm just sharing what's been working for me.
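As a rough illustration of the Data project described in the list above (hypothetical names, sketched with EF Core's fluent API; the EF6 DbModelBuilder equivalent is analogous):

using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Primary keys and relationships mapped here
        modelBuilder.Entity<Product>().HasKey(p => p.Id);
        modelBuilder.Entity<Order>()
            .HasMany(o => o.Lines)
            .WithOne(l => l.Order)
            .HasForeignKey(l => l.OrderId);
    }
}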

Should entity objects be exposed by the repository?

I have a repository which implements the interface IRepository. The repository performs queries on Entity Framework on behalf of the application and directly returns the entity objects produced.
The whole point of implementing IRepository is so that it can be swapped out for a different repository in the future. However, returning the exact entity objects produced by Entity Framework will break this. Is that acceptable?
Or should the repository convert all Entity Framework objects into business objects before exposing them to the application? Should such objects implement an interface or have a common base type?
The repository interface should deal only with business/domain entities; that is, the repository sends and receives only objects known to the app, objects that aren't related to the underlying persistence implementation.
EF or NHibernate entities model the persistence data, NOT the domain. So IRepository should not return an object that is an implementation detail of the ORM, but an object that can be used directly by the app (either a domain entity or a simplified view model, depending on the operation).
In the repository implementation, you deal with ORM entities, which are mapped to the corresponding app entities (usually with a mapper such as AutoMapper). Long story short, when designing IRepository, forget all about its implementation. That's why it is better to design the interface before deciding if/what ORM will be used.
Basically, the repository is the gateway between the app's domain context and the persistence context, and the app SHOULD NOT be coupled to the implementation details of the repository.
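In code, the ORM detail stays behind the interface, roughly like this (hypothetical names, using AutoMapper as mentioned above):

using System;
using AutoMapper;

public class CustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _context;   // ORM detail, invisible to callers
    private readonly IMapper _mapper;

    public CustomerRepository(AppDbContext context, IMapper mapper)
    {
        _context = context;
        _mapper = mapper;
    }

    public Customer GetById(Guid id)
    {
        var persistenceEntity = _context.Customers.Find(id);  // EF entity
        return _mapper.Map<Customer>(persistenceEntity);      // domain entity handed to the app
    }
}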
You should look at using one of the POCO templates for generating your entities. That way your entities have no special dependencies on Entity Framework and can be passed freely between layers. It saves a whole lot of effort compared to maintaining a completely separate domain model and mapping between the two (unless your domain model would be significantly different from your entity model, in which case a separate model would make more sense).
If you are using POCO entities, you can assume that any provider will do a similar job. Also, remember that you are returning entities whose properties are mapped to the database. So unless the entities have different property names per provider (and I cannot find a logical reason for them to have different names), you can return them from the repository to the business layer directly.
