I'm working on a new project and want to use MVC and Entity Framework. For the sake of separation of concerns, I plan to structure my project like so:
MyProject.Web (this project houses the V and C of MVC)
MyProject.Model (this project houses the M of MVC so that it can be reused and shared; this is where my business objects/domain objects live)
MyProject.BLL (this is where I write my business logic and make it available via an interface)
MyProject.Entity (this is my DAL, where entity objects will be generated by the wizard using the database-first approach)
My question is: what is the best way to convert an entity object to a business object in my BLL? My mapping requires joining two tables and computing a sum that maps to a field in a business object.
Not sure this is a good example, but let's say I join the customer table and the order table and get two records back for the same customer: one for an order placed in the AM and one for an order placed in the PM. I need to compute the total for the day and map it to a field in my business object.
AutoMapper comes to mind instead of manually coding DTOs, but I am not sure whether it can do complex mapping (with the sum calculation).
Is there a way to manually configure a custom map in EF 6 to do this?
With AutoMapper, you can use projections for complex mappings, or use AutoMapper's QueryableExtensions helper methods.
See the Aggregations section at the following link:
"LINQ can support aggregate queries, and AutoMapper supports LINQ extension methods."
https://github.com/AutoMapper/AutoMapper/wiki/Queryable-Extensions
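For example, here is a sketch of the order-total scenario from the question (the entity and DTO names are invented, not from the original post). The Sum is expressed inside the projection, so it is computed by the database rather than in memory:

using System.Collections.Generic;
using System.Linq;
using AutoMapper;
using AutoMapper.QueryableExtensions;

// Hypothetical wizard-generated entities.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Hypothetical business object with the computed field.
public class CustomerDayTotal
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal DayTotal { get; set; }
}

public static class CustomerMappings
{
    private static readonly MapperConfiguration Config = new MapperConfiguration(cfg =>
        cfg.CreateMap<Customer, CustomerDayTotal>()
           // Id and Name map by naming convention; the SUM becomes part of
           // the generated SQL instead of being computed in memory.
           .ForMember(d => d.DayTotal,
                      opt => opt.MapFrom(c => c.Orders.Sum(o => o.Total))));

    public static List<CustomerDayTotal> GetDayTotals(IQueryable<Customer> customers) =>
        customers.ProjectTo<CustomerDayTotal>(Config).ToList();
}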
I'm quite frustrated trying to figure out a possible implementation.
Pursuant to DDD and CQS concepts, the write side of my application uses various aggregates and associated repositories that are implemented with EF Core.
On the read side I want to use queries that in some cases link related data. This could be just the resolution of an id (e.g. the name of the user that made the last edit) or, for performance reasons, a list of child objects (e.g. the address book of a person).
Therefore the structure of the returned object (GetPersonDTO), including resolutions or children, differs from the write object (Person). I use value objects as the types of all properties in the entities to keep validation in one spot (always-valid objects).
My problems are on the read side. The resource representation returned from a GET request is JSON, and permissions associated with the requesting subject decide whether a field is included in the JSON.
My idea was to have EF Core return a query object plus a permission object that holds the field permissions of that object for the current subject (user). If the subject has read permission for a certain field, it is mapped to the DTO. The DTO uses Optional<T> and a custom JsonConverter as shown here. As a result, all Optional<T> fields that are not set are omitted from the JSON response, while fields that are explicitly set to NULL are preserved.
I write the queries in EF Core using raw SQL, because I didn't manage to write the more complex queries in LINQ. EF Core requires keyless entities for raw SQL queries. I expected EF Core to convert the fields it reads back into value objects using the converters that were created for the write side.
But keyless entities cannot be the principal end of a relationship, hence they cannot have owned entities. As various GitHub issues show, it is not yet possible for EF Core to recreate an object graph from a raw SQL query. It is stated that:
"In EF each entityType maps to one database object. The same database object can be used for multiple entityTypes, but you cannot specify a database object to materialize a graph of objects."
If I understand correctly, it is also not possible to achieve this with a view or a stored procedure. It sounds to me like it is likewise not possible to define another fully connected GetPerson object that reuses the existing DbSet objects.
How can this be implemented? What are the alternatives?
I can think of:
a) using a flat object with primitive types for the raw SQL result and then using it to map to the DTO (a sketch follows below). A side effect of creating the object graph with the original types is that constructing the value objects validates the data coming from the DB; with a flat object I either have to trust the database or manually call the validation methods that are public on my value objects.
b) forgetting EF Core and using ADO.NET. The return object is then the ADO.NET record, and, considering the permissions, the fields of the record are mapped to the DTO. This is simple, with less overhead, but pulls in another part of the framework.
Are there any other options? How have you solved returning a combined object considering field permissions?
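To illustrate option (a), here is a minimal sketch with a hypothetical flat PersonRow read model (all names are invented):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Flat, keyless read model with primitive types only.
public class PersonRow
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
}

public class ReadContext : DbContext
{
    public DbSet<PersonRow> PersonRows => Set<PersonRow>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Keyless: queryable via raw SQL, but cannot own other entities.
        modelBuilder.Entity<PersonRow>().HasNoKey();
    }
}

public static class PersonQueries
{
    public static Task<List<PersonRow>> LoadAsync(ReadContext context, Guid id)
    {
        // The rows are then mapped to GetPersonDTO; either I trust the DB,
        // or I call the value objects' public validation methods manually.
        return context.PersonRows
            .FromSqlInterpolated($@"SELECT p.Id, p.Name, a.City
                                    FROM Persons p
                                    JOIN Addresses a ON a.PersonId = p.Id
                                    WHERE p.Id = {id}")
            .ToListAsync();
    }
}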
EF Core 6 does not support persisting value objects; this feature is planned for EF 7: https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-7.0/plan#value-objects
The purpose of an ORM such as EF is to allow programmers to manipulate the RDBMS through objects rather than a text-based language such as SQL. That object model is not a business model, but a set of symbols for RDBMS concepts (class = table, object = row, property = column, ...). For trivial applications you can conflate the two models, but you will quickly find yourself limited, because a business model has different constraints than a database schema. For larger applications, you write a persistence model consisting of DPOs (data persistence objects) that match your database structure, and translate that model to other models in the infrastructure layer. Decoupling your domain model from your persistence model gives your application more flexibility and lets you re-hydrate your domain objects as a polysemic model, limiting side effects between independent use cases.
An example would be a normalized RDBMS whose tables include surrogate keys that are hidden from the domain model during the projection done by the repository. This lets your database resolve the complexity of relational mapping while the domain layer re-hydrates value objects without identities.
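To make that concrete, a small sketch (the AddressRecord/Address names are mine, not from the question): the persistence object carries the surrogate key, while the re-hydrated domain value object has no identity at all.

// Persistence model (DPO): mirrors the table, surrogate key included.
internal class AddressRecord
{
    public long AddressId { get; set; } // surrogate key, never leaves the repository
    public string Street { get; set; }
    public string City { get; set; }
}

// Domain model: a value object, equal by value, with no identity.
public sealed record Address(string Street, string City);

internal static class AddressProjection
{
    // The repository's projection simply drops the surrogate key.
    public static Address ToDomain(AddressRecord record)
    {
        return new Address(record.Street, record.City);
    }
}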
For queries, you should not publish your entities through a GetPerson model. The domain layer's purpose is to protect your application from violating business rules. When querying your application state you do not modify that state and therefore cannot violate any rule, so a domain model is only useful for state-changing use cases. When handling a query, you should instead map your DTO directly from the DPO. You will save performance and allow your DTO to push sort/filter/paging features down to the database through libraries such as AutoMapper, as long as the projection is within the library's translation capacity. Your business rules implementation will also not impact, or be impacted by, large and complex query models, which is the initial purpose of a CQS architecture.
Whether you manipulate your database through an ORM such as EF or through raw SQL queries at the ADO.NET level is an implementation detail of your infrastructure layer. The choice depends on whether you think you can write "better" queries than the ORM, "better" being a subjective matter that depends on your project constraints.
Update 1: Regarding mapping to your DTO with Optional<T>: EF Core has limited ability to map relational data into a model that does not simply represent the database schema. This is by design, and you shouldn't force EF to restore data into the DTO directly. Using a single model couples your API interface to your database persistence schema; you would risk interface-breaking changes each time you update the persistence schema, and vice versa. You should have two different models to decouple presentation from persistence.
Whether you use EF Core or ADO.NET to read the database does not change much conceptually. In both cases you read database information into an in-memory model, then translate that model into a DTO. The difference is whether this in-memory model is object-oriented (EF + DPO model) or a key-value table (ADO.NET + DataRow).
A pro of using EF Core is that it is less prone to SQL injection, since queries are generated and values are always escaped for you. Also, the persistence model can be translated into DTOs through a mapping library such as AutoMapper, which makes the translation easier and cheaper to write and maintain, and which can project part of the translation into the relational model.
If you manage to model your security into the map profile, your database can select only the columns required for exposure in the DTO. In other words, if the user is not allowed to see DTO.Name, then the database would not include Table.Name in the SELECT statement. This is not always possible, but it is much easier done this way than by writing "clever" queries in SQL.
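A sketch of one way to express this with AutoMapper's ExplicitExpansion feature (the Person/Address types here are placeholders, not from the question); members that are not explicitly expanded are simply left out of the generated SELECT:

using System.Collections.Generic;
using System.Linq;
using AutoMapper;
using AutoMapper.QueryableExtensions;

public class AddressDpo { public string City { get; set; } }

public class PersonDpo
{
    public int Id { get; set; }
    public List<AddressDpo> Addresses { get; set; }
}

public class AddressDto { public string City { get; set; } }

public class GetPersonDTO
{
    public int Id { get; set; }
    public List<AddressDto> Addresses { get; set; }
}

public static class SecureProjection
{
    private static readonly MapperConfiguration Config = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<AddressDpo, AddressDto>();
        cfg.CreateMap<PersonDpo, GetPersonDTO>()
           // Addresses are only joined and selected when expanded below.
           .ForMember(d => d.Addresses, opt => opt.ExplicitExpansion());
    });

    public static IQueryable<GetPersonDTO> Project(
        IQueryable<PersonDpo> people, bool mayReadAddresses)
    {
        return mayReadAddresses
            ? people.ProjectTo<GetPersonDTO>(Config, null, d => d.Addresses)
            : people.ProjectTo<GetPersonDTO>(Config);
    }
}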
One downside of EF, however, is that it is slower than ADO.NET.
If you really need to split your query into a two-phase transformation (mapping and security), you should put the security layer closer to the database, unless the translation logic needs that data to map accordingly.
This is a bit of a subjective, best-practices question, but I'll answer with how I've solved a similar problem, assuming I've actually understood your question properly.
As long as you've mapped the database model fully using navigation properties, it is possible to generate very complex queries without resorting to raw queries.
var dto = await context.Persons
    .Where(p => p.Id == id)
    .Select(p => new GetPersonDTO
    {
        Id = p.Id,
        // Include the field only when the current user is allowed to see it.
        InternallyVerifiedField = p.UsersWithAccess.Contains(currentUser)
            ? new Optional<int>(p.InternallyVerifiedField)
            : new Optional<int>(),
        ExternallyVerifiedField = permissions.Contains(nameof(GetPersonDTO.ExternallyVerifiedField))
            ? new Optional<int>(p.ExternallyVerifiedField)
            : new Optional<int>()
    })
    .SingleOrDefaultAsync();
In this example, InternallyVerifiedField depends on an inline query and ExternallyVerifiedField depends on an external permissions object. The benefit of ExternallyVerifiedField is that the branch may be optimized out of the expression before it even reaches the SQL server when the user lacks permission.
If you want to build the DTO from a fully connected object, it can still be done in one query, similar to:
var dto = await context.Persons
    .Where(p => p.Id == id)
    .Select(p => new
    {
        permissions = new GetPersonDTOPermissions
        {
            FieldA = context.Permissions.Where(...)
        },
        person = p
    })
    .SingleOrDefaultAsync();
But with this solution you need to manually craft the DTO from the graph object person, given the resulting permissions. As long as you start with context and add a filter using Where, the query will be inlined.
Let's say I have a set of classes that I want to share across multiple projects. For instance, I could use them in a REST service and also in a client that consumes that service.
So I create the following projects:
MyOrders.Models
MyOrders.RestApi
MyOrders.Client
Both the RestApi and Client projects have dependencies on the Models project.
The RestApi project uses Entity Framework (code first), so normally you'd decorate the models' properties with attributes like [NotMapped] and [Key]. However, I don't want the Client project to have any dependency on Entity Framework. None. So I can't decorate the models' properties with EF-specific attributes.
So my question is, is there some way to correctly set the models' EF-specific attributes from the RestApi project instead, maybe in the Context's constructor or something?
You can have the POCOs in your Models project, keep them totally ignorant of Entity Framework, and do the mappings in a separate project or in the RestApi project itself.
You can do this with the fluent mapping API, for instance in the OnModelCreating override of the context that you create in the EF-aware project:
modelBuilder.Entity<Order>().HasKey(o => o.OrderID);
modelBuilder.Entity<Order>().Ignore(o => o.OrderTotal);
etc.
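Put together, a minimal sketch of such an EF6-aware context (assuming an Order POCO with an OrderID key and a computed OrderTotal, matching the two lines above):

using System.Data.Entity; // EF6

// Lives in MyOrders.Models: a plain POCO with no EF attributes.
public class Order
{
    public int OrderID { get; set; }
    public decimal OrderTotal { get; set; } // computed, not stored
}

// Lives in MyOrders.RestApi (or a dedicated mapping project).
public class OrdersContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Order>().HasKey(o => o.OrderID);
        modelBuilder.Entity<Order>().Ignore(o => o.OrderTotal);
    }
}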
This is a good argument for using custom Data Transfer Objects that are independent of the table-like entities. Although it can feel like overkill to have nearly duplicate classes, one set as DTOs and one as EF entities, there is a long-range benefit: the two sets of classes can vary independently. Say you change the table structure but the client doesn't need to know about the change: you update the EF entity and leave the DTO alone, though you may have to update how you map from EF to DTO.
Speaking of mapping: EmitMapper can be a great help in transferring between the two types of objects.
You need to split your data access models from the rest of the application using Data Transfer Objects.
This will give you a lot of benefits. At first it will look as if you're duplicating all the code of the model, but as your application grows you will find you need the data in a view formatted differently from how it is stored in the database. Validation attributes can be added in a very specific way, just the way you need them, as the example below shows.
Mapping between them can be done in various ways: by hand, or with a tool like AutoMapper.
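For illustration (the Product types are invented), the view-specific DTO carries the validation attributes while the persistence entity stays clean:

using System.ComponentModel.DataAnnotations;

// Persistence entity: mirrors the table, no presentation concerns.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// View-specific DTO: validation and formatting live here.
public class ProductEditModel
{
    public int Id { get; set; }

    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    [Range(0.01, 1000000)]
    public decimal Price { get; set; }
}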
Please help me choose the right way to use entities in an n-tier web application.
At the present moment I have the following assemblies in it:
The Model (custom entities) describes the fields of the classes that the application uses.
The Validation validates data integrity coming from the UI using the reflection-attribute method (it checks data in all layers).
The BusinessLogicLayer is a business facade for additional logic and caching that uses abstract data providers from the DataAccessLayer.
The DataAccessLayer implements the abstract data providers using a LINQ to SQL data context and LINQ queries. And here is the point that makes me feel I'm going wrong...
Right before it sends data to the business layer, my data layer maps (converts) the data retrieved from the DB to the Model classes (custom entities) using mappers. It looks like this:
internal static model.City ToModel(this City city)
{
    if (city == null)
    {
        return null;
    }

    return new model.City
    {
        Id = city.Id,
        CountryId = city.CountryId,
        AddedDate = city.AddedDate,
        AddedBy = city.AddedBy,
        Title = city.Title
    };
}
So the mapper maps the data object to the describing model. Is that the right and common way to work with entities, or should I use the data objects themselves as entities (to save time)? Am I clear enough?
You could use your data entities directly in your project if they are POCOs. Otherwise I would create separate models, as you have done, but keep them in a separate assembly (not in the DataAccess project).
But I would not expose them through a web service.
Other suggestions
IMHO people overuse layers. Most applications do not need a lot of them. My current client had an architecture like yours for all their applications. The problem was that only the data access layer and the presentation layer had logic in them; all the other layers just took data from the layer below, transformed it, and sent it to the layer above.
The first thing I did was tell them to scrap all the layers and instead use something like this (requires an IoC container):
Core (contains business rules and data access through an ORM)
Specification (separated interface pattern; contains service interfaces and models)
User interface (might be a web service, WinForms, or a web app)
That works for most applications. If you find that Core grows and becomes too large to handle, you can split it up without affecting any of the user interfaces.
You are already using an ORM; have you thought about using a validation block (FluentValidation or DataAnnotations) for validation? It makes it easy to validate your models in all layers.
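For example, a minimal FluentValidation sketch for the City model from your question (the rules themselves are invented):

using FluentValidation;

// The model class from the question, repeated here for completeness.
public class City
{
    public int Id { get; set; }
    public int CountryId { get; set; }
    public string Title { get; set; }
}

public class CityValidator : AbstractValidator<City>
{
    public CityValidator()
    {
        // Invented rules; adjust to your real integrity requirements.
        RuleFor(c => c.Title).NotEmpty().MaximumLength(100);
        RuleFor(c => c.CountryId).GreaterThan(0);
    }
}

// Usable from any layer:
// var result = new CityValidator().Validate(city);
// if (!result.IsValid) { /* inspect result.Errors */ }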
It may be a common practice to send out DTOs from a service boundary (a WCF service, etc.), but if you are using your "entities" directly in your presentation model, I don't see any benefit in doing that.
As for the code snippet you have provided, why not use AutoMapper? It eliminates boilerplate mapping code and does the work for you, provided you have a set of conventions in place.
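For instance, the hand-written ToModel extension from the question could collapse to a sketch like this (it relies on the City and model.City property names matching, which they do in your snippet):

using AutoMapper;

public static class CityMappingExtensions
{
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
        // Id, CountryId, AddedDate, AddedBy and Title all match by name,
        // so no per-property mapping code is needed.
        cfg.CreateMap<City, model.City>()).CreateMapper();

    // Drop-in replacement for the hand-written city.ToModel() extension.
    internal static model.City ToModel(this City city) =>
        city == null ? null : Mapper.Map<model.City>(city);
}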
Get rid of the model now; removing it later will require refactoring the whole application. The last project I worked on used this architecture, and maintaining the DTO layer and the mappings to the database model layer is a huge pain in the arse that offers no useful benefits.

One of the main annoyances is that LINQ to SQL does not effectively support a disconnected data model. You cannot update a database table by creating a new DB entity with a primary key matching an existing record and sticking it into the data context; you have to first retrieve the entity from the database, update it, then commit the changes. Managing this results in really nasty update methods that copy all the properties from your DTOs to your LINQ to SQL classes, and it breaks the whole deferred execution model of LINQ to SQL.

Don't even get me started on the problems it causes with parent-class properties that are collections of child DTOs (e.g. a customer DTO with an Orders property containing a collection of order DTOs); managing those mappings is really, really fiddly. I had to do some extensive optimisation because retrieving a few hundred records ended up causing LINQ to SQL to make 200,000 database calls (admittedly there was also some pretty dumbass code in there, but you get the picture).
The only valid reason to use DTOs is if you want multiple pluggable data access layers, e.g. LINQ to SQL and NHibernate, to support different DB servers. That way you can swap out the data access layer later without having to change any other layer. If you don't need that, save yourself a world of pain and just use the LINQ to SQL entities.
I'm making my first database program, with SQL Express. Currently I'm using LINQ to SQL for data access, and my repository classes return "entity" type objects; meaning, I extend the dbml entity classes to use as my business object classes. Now I want to make this more separated and have POCO business objects.
This is where I wonder what solutions may exist. It looks to me like I need to manually map each entity class into a domain class, property by property, in the repositories. So far I have about 20 tables with a few hundred columns in total. I just want to verify: is this a common/typical approach that you still use? And if there are alternatives that don't introduce excessive complexity, what would they be?
Before creating your mappings manually, have a look at AutoMapper:
"AutoMapper is an object-object mapper. Object-object mapping works by transforming an input object of one type into an output object of a different type. What makes AutoMapper interesting is that it provides some interesting conventions to take the dirty work out of figuring out how to map type A to type B. As long as type B follows AutoMapper's established convention, almost zero configuration is needed to map two types."
AutoMapper is a good tool for class-to-class conversions. However, if you're considering a DAL that combines LINQ to SQL and AutoMapper, why not just go with Fluent NHibernate? It's very easy to set up, works on just about any database including SQL Express, and there is a LINQ provider that integrates pretty seamlessly. All of this is free open-source code and very commonly used, so there's ample documentation and support.
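As a taste, a Fluent NHibernate mapping for one hypothetical table might look like this (class and table names invented):

using FluentNHibernate.Mapping;

// Hypothetical POCO business object; NHibernate needs virtual members.
public class Person
{
    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }
}

// Replaces the dbml designer: maps the POCO to the People table.
public class PersonMap : ClassMap<Person>
{
    public PersonMap()
    {
        Table("People");
        Id(x => x.Id);
        Map(x => x.Name);
    }
}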
If you want to stay with LINQ to SQL but have a more full-featured domain model, you could consider deriving your domain model from the DTOs. That would let you keep the business logic in the domain while the properties are inherited from the DTO. Understand, however, that the LINQ to SQL objects cannot be cast directly to domain objects; you'll need a constructor in the domain class that takes a DTO and copies the data in (at least a one-way mapping of DTO to domain). The domain object can always be treated as a DTO (a derived class can stand in for its base), so the reverse conversion isn't necessary; just hand the domain class to the repository where it expects the DTO.
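A sketch of that derivation approach (types invented for illustration):

// DTO: the shape LINQ to SQL knows how to persist.
public class PersonDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Domain class: inherits the DTO's properties and adds behaviour.
public class Person : PersonDto
{
    // One-way mapping: copy the DTO's data into the domain object.
    public Person(PersonDto dto)
    {
        Id = dto.Id;
        Name = dto.Name;
    }

    public bool HasValidName()
    {
        return !string.IsNullOrWhiteSpace(Name);
    }
}

// No reverse mapping needed: a Person *is a* PersonDto, so it can be
// handed straight to a repository method that expects the DTO.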
So I'm just getting started with Entity Framework. I'm working with a very large existing database, and I find myself wanting to use EF to create models that are "slices" of the whole database, where each slice corresponds to one aspect of the application. Is that the right way to look at it, or should I try to model the whole database in one EDMX?
Let me give you a fictional example:
Suppose that one of the many things this database contains is customer billing information. I feel like I want to create an EF model that focuses on just the tables the Customer Billing module needs to interact with (so that model would NOT be used by other modules in the app; rather, the same tables might appear in other small EF models). This would allow me to leverage EF's conceptual-model features (inheritance, etc.) to build a view that is correct for Customer Billing, without worrying about that model's effects on, say, Customer Support (even though the two modules share some tables).
Does that sound right?
It sounds right to me. The point of an Entity Model, after all, is to provide a set of persistence-capable business objects at a level of abstraction that's appropriate to the required business logic.
You should absolutely create entity models that support modules of the application, not models that copy the underlying database schema. Separating logic from persistence is one of the primary purposes of EF.
I would prefer the slice approach, for the following reasons:
If you have a massive database with loads of tables, a single massive entity model is difficult to manage.
It is easier to maintain application- or domain-specific entities: since Entity Framework is not a one-to-one table-to-entity mapping, you can create custom entities and also combine and split tables across entities.
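Purely as an illustration of the slicing idea (type names invented; with database-first you'd achieve the same by creating separate, smaller EDMX models over the shared tables):

using System.Data.Entity;

public class Customer { public int Id { get; set; } public string Name { get; set; } }
public class Invoice { public int Id { get; set; } public int CustomerId { get; set; } }
public class Ticket { public int Id { get; set; } public int CustomerId { get; set; } }

// Billing slice: only the tables the Customer Billing module touches.
public class BillingContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Invoice> Invoices { get; set; }
}

// Support slice: shares the Customers table but knows nothing about billing.
public class SupportContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Ticket> Tickets { get; set; }
}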