I'm at a bit of a loss here.
I have a one-to-many relationship between Project and Task and another between Task and TaskEvent.
A Task only exists in the context of a Project and once assigned to a Project can't be changed to belong to another Project.
A business rule states that a Task can only be deleted, and therefore removed from the collection of Tasks that belong to a certain project, if the Task has no TaskEvents captured against it.
How do I specify this in Entity Framework? I'm using self-tracking entities, but really I'm at a loss as to where to define this kind of rule in general. We have other rules that are database-agnostic, but how does one define a business rule like this, preferably as a class of its own with a single responsibility, existing in isolation from the entity classes (since those are regenerated)?
I'm thinking I'll have to implement some sort of validator that can use reflection to pick up these 'rule' classes based on the type of the object being validated and then have them each perform their validations.
But how do I push the object context into that? Should my validator have an instance of the object context and then pass it through to each rule as it is executed?
And even more frustrating, how do I detect the deletes? Will I have to load the old version of the Project, compare its old tasks against the current ones, and then check every deleted Task to make sure it has no TaskEvents captured against it?
What are the drawbacks to this method and what else can you suggest?
EDIT: I should specify that we're using an n-tier design, and both client apps (MVC and Silverlight) hit WCF services to do anything useful. That is obviously the layer we want to implement the validation in, although if we could reuse the rules that aren't db-specific on the clients, that would be great. We're using DataAnnotations for those validations at present.
Thanks
Have a look at this.
Since you are using an n-tier design, my suggestion is to use EF as an ORM layer rather than as a full substitute for a domain model. You should create a separate BLL that contains the business rules and map the domain model to the EF classes. If the mapping is not complicated it can be done manually; otherwise you can use a tool such as AutoMapper to perform the mapping for you.
We ended up using a service layer which encapsulated the rules and validated entities based on rule contexts.
For each action there was a context and a set of rules that were associated through the use of an attribute. All associated rules could therefore target the entity of a specific type required by the service action.
The rules were identified using reflection and tested on the service call.
Entity Framework entities still acted as a slightly thinner domain model than I would have liked but we never ran into serious issues and the tracking provided by EF actually helped make some previously impossible rules easy.
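A minimal sketch of that pattern, with all names invented for illustration (this is not from an actual library, and it assumes the Task entity from the question exposes a TaskEvents collection):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Associates a rule class with a named action context, e.g. "DeleteTask".
[AttributeUsage(AttributeTargets.Class)]
public class AppliesToAttribute : Attribute
{
    public string Context { get; private set; }
    public AppliesToAttribute(string context) { Context = context; }
}

public interface IRule<T>
{
    // Returns an error message, or null when the rule passes.
    string Validate(T entity);
}

public static class RuleRunner
{
    // Discovers all IRule<T> implementations tagged with the given
    // context via reflection and runs them against the entity.
    public static IEnumerable<string> Run<T>(string context, T entity)
    {
        var rules = Assembly.GetExecutingAssembly().GetTypes()
            .Where(t => typeof(IRule<T>).IsAssignableFrom(t) && !t.IsAbstract)
            .Where(t => t.GetCustomAttributes(typeof(AppliesToAttribute), false)
                         .Cast<AppliesToAttribute>()
                         .Any(a => a.Context == context))
            .Select(t => (IRule<T>)Activator.CreateInstance(t));

        return rules.Select(r => r.Validate(entity)).Where(e => e != null);
    }
}

// Example rule for the original question: a Task may only be
// deleted when it has no TaskEvents captured against it.
[AppliesTo("DeleteTask")]
public class TaskHasNoEventsRule : IRule<Task>
{
    public string Validate(Task task)
    {
        return task.TaskEvents.Any()
            ? "A Task with captured TaskEvents cannot be deleted."
            : null;
    }
}

The service action would call RuleRunner.Run("DeleteTask", task) before committing, and rules that need the object context could receive it as an extra parameter or constructor argument.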
I plan to create an application having the following layers, and use Entity Framework as my ORM:
Presentation: not relevant for my question.
Business: in this layer I plan to only use DTO objects. I want my business layer separated from any database implementation details, and therefore all interaction with entities is done in the DAL layer.
DAL: in this layer I plan to have all Entity Framework code and the entities. I will introduce repositories which can be called by the Business layer. These repositories expect DTO objects as input, and return DTO objects as output. Mapping between DTO and entity is done in this layer.
I have looked at quite some online tutorials, and articles related to Entity Framework and came across the DbContextScope project which seems like a very nice solution to control a 'business transaction' and ensure all related changes are committed or rolled back. See GitHub: https://github.com/mehdime/DbContextScope
The demo in that GitHub repository contains a scenario where a new entity is created in the database. When I try to map that scenario to my layers, it seems to go like this:
Business: create DTO with property values for the entity to be stored. Create new DbContextScope and call repository in the DAL layer passing the DTO.
DAL: the repository maps the DTO to an entity and adds it to the DbContext of Entity Framework.
Business: call the SaveChanges() method on DbContextScope, which in turn calls SaveChanges() on the DbContext of Entity Framework.
In the demo the ID of the entity being stored is already known when the DTO is created. However, I am looking for a way to determine the ID automatically assigned by EF once the SaveChanges() method on the DbContextScope is called in the business layer. Since I am in the business layer at this point, I no longer have access to the entity, and hence cannot reach its ID property.
I guess I can only determine the ID by querying the database for the record just created, but this is only possible if the original DTO contains some unique identifier I could use to query the database. But what if I do not have a unique value in my DTO I can use to query?
Any suggestions on how to solve this, or do you recommend an alternative approach to my layers? (e.g. use entities in the business layer as well, even though that sounds like the wrong thing to do)
I use Mehdime's context scope wherever I can, as I've found it to be an exceptional implementation of a unit of work. I agree with Camilo's comment about the unnecessary separation. If EF is trusted to serve as your DAL then it should be trusted to work as designed so that you can leverage it completely.
In my case, my controllers manage the DbContextScope, and I utilize a repository pattern in combination with a DDD design for my entities. The repository serves as the gatekeeper for interactions with the context, scoped and located via the DbContextLocator. When it comes to creating entities, the repository serves as the factory with a "Create{X}" method, where {X} represents the entity. This ensures that all information required to create the entity is provided, and that the entity is associated with the DbContext before being returned, so the entity is guaranteed to always be in a valid state. It also means that once the context scope's SaveChanges call is made, the bounding service automatically has the entity with its assigned ID.
ViewModels / DTOs are what the controller returns to the consumer. You also have the option of calling the DbContext's SaveChanges within the boundary of the DbContextScope, which will reveal IDs prior to the context scope's SaveChanges; this is more of an edge-case scenario for when you want to fetch an ID for loosely coupled entities (no FK/mapped relationship). The repository also provides the "Delete" code to ensure all related entities, rules, and such are managed, while editing entities falls under DDD methods on the entity itself.
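A rough sketch of that factory shape, assuming the IAmbientDbContextLocator and IDbContextScopeFactory types from the DbContextScope project; the entity, context, and repository names are made up:

using System;
using Mehdime.Entity; // the DbContextScope library

public class OrderRepository
{
    private readonly IAmbientDbContextLocator _contextLocator;

    public OrderRepository(IAmbientDbContextLocator contextLocator)
    {
        _contextLocator = contextLocator;
    }

    private MyDbContext Context
    {
        get { return _contextLocator.Get<MyDbContext>(); }
    }

    // Factory method: the entity is complete and attached to the
    // ambient context before it is returned, so it is always valid.
    public Order CreateOrder(Customer customer, DateTime orderDate)
    {
        if (customer == null) throw new ArgumentNullException("customer");
        var order = new Order { Customer = customer, OrderDate = orderDate };
        Context.Orders.Add(order);
        return order;
    }
}

// Usage from a controller or service:
// using (var scope = _dbContextScopeFactory.Create())
// {
//     var order = _orderRepository.CreateOrder(customer, DateTime.UtcNow);
//     scope.SaveChanges();
//     // order.Id has now been assigned by the database.
// }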
There may be a more purist argument that this "leaks" details of the domain or EF-specific concerns into the controller, but my personal opinion is that the benefits of "trusting" entities and EF within the scope of the bounded context within the service layer far, far outweigh anything else. It's simpler, and allows you a lot of flexibility in your code without near-duplicate methods propagating to supply consumers with filtered data, or complex filtering logic to "hide" EF from the service layer. The basic rule I follow is that entities are never returned outside the boundary of their context scope. (No detach/reattach; just Select into ViewModels, and manage Create/Update/Delete on entities based on passed-in view models/parameters.)
If there are more specific concerns / examples you can provide, please feel free to add some code outlining where you see those issues.
I'm setting up a solution in which I apply onion architecture and DDD patterns.
One of the DDD principles encourages domain entities to have only private setters and a private default constructor, to make sure you cannot create domain entities in an invalid state.
Repositories contain the data operations on domain entities, which are mapped from/to the database. I have been trying the following two approaches:
Domain entities in a purist way: no default constructor, no public setters; validation is done in the constructor(s), which makes sure you cannot create a domain entity in an invalid state. The side effect is that it's harder to dematerialize them in the repositories' read operations: you need reflection to create instances and map properties, and you need dynamics in the Dapper requests, which then have to be mapped to the actual domain entities. If I map the query results directly to the domain entities without using dynamics, Dapper throws an exception that there is no public constructor.
Domain entities in a non-purist way: you allow a default constructor, and all setters are public, so you can create entities that are not valid at a given point in time. In that case you need to call the Validate() method manually to make sure they are valid before you continue. This makes dematerializing in the repositories much easier, as you need neither reflection nor dynamics to map the database to the model.
Both methods work; however, with option 2 the repositories become a lot simpler because they contain much less custom mapping code, and without reflection they will obviously be more performant as well. Of course, DDD is then not applied in a purist way.
Before I decide what to use in my project, a question: are there any other micro-ORM frameworks out there that can handle private constructors and setters, so that mapping the database to these 'pure' domain entities is supported without additional custom mapping logic? (No EF nor NHibernate; I want something lightweight.)
Or are there other technical solutions to keep the 'pure' model entity approach in combination with easy repository mapping?
EDIT: the solution I implemented was the following.
First, constructors and setters in the domain entities are all 'internal', which means they cannot be called by consumers of the domain model. However, I use 'InternalsVisibleTo' to give the data access layer direct access to them, which means that dematerializing from the database is very easy with Dapper (no need for intermediate models). From the application layer, I can only use domain methods to change a domain entity, not the properties directly.
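In code, that arrangement looks roughly like this (the assembly name, entity, and members are illustrative, not from the actual solution):

// In the domain assembly, e.g. in AssemblyInfo.cs:
using System;
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyApp.DataAccess")]

// Consumers of the domain model cannot construct the entity or set
// its properties directly; the data access layer (and Dapper) can.
public class User
{
    internal User() { }

    public string SubjectId { get; internal set; }
    public string DisplayName { get; internal set; }

    // Changes from the application layer go through domain methods:
    public void Rename(string newDisplayName)
    {
        if (string.IsNullOrWhiteSpace(newDisplayName))
            throw new ArgumentException("A display name is required.");
        DisplayName = newDisplayName;
    }
}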
Second, to construct new domain entities from my application layer, I added fluent builders to help build domain entities, so I can now construct them like this:
User user = new UserBuilder()
    .WithSubjectId("045454857451245")
    .WithDisplayName("Bobby Vinton")
    .WithLinkedAccount("Facebook", la => la.WithProviderSubjectId("1548787788877").WithEmailAddress("bobby1@gmail.com"))
    .WithLinkedAccount("Microsoft", la => la.WithProviderSubjectId("54546545646").WithEmailAddress("bobby2@gmail.com"))
    .Build(); // validation runs here
When the builder 'builds' the entity, validation is done as well, so you can never create an entity in an invalid state.
One of the DDD principles encourages domain entities to have only private setters and a private default constructor, to make sure you cannot create domain entities in an invalid state.
That's not quite right. Yes, rich domain models don't usually expose setters, but that is because they don't need setters. You tell the model what to do at a higher level of abstraction, and allow it to determine how its own data structures should be modified.
Similarly, there are often cases where it makes sense to expose the default constructor: if you think of an aggregate as a finite state machine, then the default constructor is a way to initialize the aggregate in its "start" state.
So normally you reconstitute an aggregate in one of two ways: either you initialize it in its default state, and then send it a bunch of messages, or you use the Factory pattern, as described in the blue book.
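As an illustration of that idea, here is a hedged sketch (the aggregate and its states are invented, not from the blue book):

using System;
using System.Collections.Generic;

public enum OrderStatus { Draft, Submitted }

public class Order
{
    private readonly List<string> _lines = new List<string>();

    public OrderStatus Status { get; private set; }

    // The default constructor initializes the aggregate in its "start" state.
    public Order()
    {
        Status = OrderStatus.Draft;
    }

    // No setters: you tell the aggregate what to do, and it decides how
    // its own data structures change while enforcing its invariants.
    public void AddLine(string product)
    {
        if (Status != OrderStatus.Draft)
            throw new InvalidOperationException("Lines can only be added to a draft order.");
        _lines.Add(product);
    }

    public void Submit()
    {
        if (_lines.Count == 0)
            throw new InvalidOperationException("An empty order cannot be submitted.");
        Status = OrderStatus.Submitted;
    }
}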
this means an extra mapping in between, which makes code more complex
Perhaps, but it also ensures that your domain code is less dependent on ORM magic. In particular, it means that your domain logic can be operating on a different data structure than what is used to make persistence "easy".
But it isn't free -- you have to describe in code how to get values out of the aggregate root and back into the database (or into the ORM entity, acting as a proxy for the database).
The key is that you don't use Dapper to work with your domain entities, but instead you use it inside your repository layer with POCO entities. Your repository methods will return Domain entities by converting the POCO entities (that Dapper uses for data access) to Domain entities.
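A sketch of that split, with invented table and type names; QuerySingleOrDefault is a standard Dapper extension method, and the domain User is assumed to expose a validating constructor:

using System.Data;
using Dapper;

// Persistence POCO: public setters, mirrors the table,
// and never leaves the repository layer.
internal class UserRecord
{
    public int Id { get; set; }
    public string SubjectId { get; set; }
    public string DisplayName { get; set; }
}

public class UserRepository
{
    private readonly IDbConnection _connection;

    public UserRepository(IDbConnection connection)
    {
        _connection = connection;
    }

    public User GetBySubjectId(string subjectId)
    {
        // Dapper materializes the simple POCO...
        var record = _connection.QuerySingleOrDefault<UserRecord>(
            "SELECT Id, SubjectId, DisplayName FROM Users WHERE SubjectId = @subjectId",
            new { subjectId });
        if (record == null) return null;

        // ...and the repository converts it into a domain entity through
        // a constructor (or factory) that enforces the domain's invariants.
        return new User(record.SubjectId, record.DisplayName);
    }
}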
I've happily been using AutoMapper in a few projects and made use of .ReverseMap() when going from ViewModel to Model. I'd typically do the following:
// run at startup
// I'd customize the mapping if needed
Mapper.CreateMap<Model, ViewModel>().ReverseMap();
[HttpPost]
public ActionResult Create(ViewModel viewModel)
{
var data = Mapper.Map<Model>(viewModel);
_repo.Insert(data);
_uow.Save();
return View();
}
Then I find this article: http://lostechies.com/jimmybogard/2009/09/18/the-case-for-two-way-mapping-in-automapper/
I'm at a loss.
Is the article simply outdated or is there a better way?
A disclaimer first: There are all kinds of different domains and architectures and this answer may not apply to your design goals or architecture at all. You're of course free to use AutoMapper however you want. Also, I did not write the article, so I'm using my own experience with certain types of projects to come up with an answer.
The importance of the domain layer
First of all, the article in question assumes you are using some version of domain driven design. At the very least, I'd say it appeals to the idea of a domain that's a very important part of the project and should be "protected." The bit that best sums this idea up is:
...Because then our mapping layer would influence our domain model.
The author did not want artifacts outside of the domain layer updating the domain.
Why? Because:
The domain model is the heart of the project.
Modification to the domain layer should be a serious operation--the most important parts handled by the domain layer itself.
The article mentions a few problems with allowing the mapping piece of the solution to do domain model updates, including:
Forced mutable, public collections, like public EntitySet<Category> Categories { get; } <- NO.
You might wonder why having a mutable, public collection is a bad thing--from a domain model perspective you probably don't want some external service blasting in (possibly invalid) categories whenever it wants.
A more sensible API for adding Categories in this case would be:
Have AddCategory and RemoveCategory methods, so the entity itself can do some validation before adding.
Expose an IEnumerable<Category> { get; } that can never be modified by outside consumers.
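A sketch of such an API (the Product type is a stand-in, not the article's model, and Category is assumed to be an existing domain type):

using System;
using System.Collections.Generic;

public class Product
{
    private readonly List<Category> _categories = new List<Category>();

    // Outside consumers can enumerate but never modify the collection.
    public IEnumerable<Category> Categories
    {
        get { return _categories.AsReadOnly(); }
    }

    // The entity validates before mutating its own collection.
    public void AddCategory(Category category)
    {
        if (category == null)
            throw new ArgumentNullException("category");
        if (!_categories.Contains(category))
            _categories.Add(category);
    }

    public void RemoveCategory(Category category)
    {
        _categories.Remove(category);
    }
}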
A project I worked on recently had a quite complex domain. Domain entities frequently were only updated after prospective update operations were run through several validation services (living in the domain).
We would have introduced a huge design problem had we allowed mapping back to domain entities.
Allowing AutoMapper (or another mapping project) to map directly to our domain entities would be subverting the logic in the domain entities (or more likely domain services that performed validation) and creating a kind of "back door" into the domain layer.
Alternative
Hopefully the commentary above provided a little help. Unfortunately the alternative when you're not doing automatic mapping is to use plain ol' =. At the very least, though, if you're in a DDD environment you'll be forced to think a little more about what should happen before a domain entity is created or updated.
However
... The .ReverseMap method exists. I think this article still rings true for a certain type of project. The built-in ability to automatically create a two-way mapping simply means the library can serve applications beyond the kind the article targets.
As stated in the disclaimer, two-way mappings might make total sense for your application.
I've been reading articles on StackOverflow and other sites all day about best architecture practices and there are just so many conflicting ideas and opinions.
I've finally settled on an approach, but I am having a really hard time deciding where to place the EF objects (DbContext, Fluent APIs, Seeding data, etc). Here is what I currently have:
ASP.NET MVC Project: The actual web project. Contains the standard views, controllers and View Models (inside a Models folder).
Domain Model Project: Contains all POCO classes that define the database (domain) objects. Currently, does not mention or reference any EF objects.
Service Layer Project: Contains service objects for each type of domain object (e.g., IProductService, IOrderService, etc). Each service references EF objects like DbSets and handles business rules - e.g., add a Product, fetch a Product, append a Product to an Order, etc.
So the question is, in this configuration, where do EF classes go? Initially I thought in the Service Layer, but that doesn't seem to make sense. I then thought to put them in the Domain Model Layer, but then it ties the Domain Models to EF, which is essentially a DAL / Repository. Finally, I thought about creating a separate DAL Project just for EF, but it seems like a huge waste considering it will likely have 3-4 files in it (DbContext and a few other small files).
Can anyone provide any guidance?
There is no need for a Domain Model project, since it would be redundant. EF classes can act directly as the domain model, and they are converted to View Models when sent to the View. EF can be separated into a different class library. Many people use the repository pattern along with an ORM so that it would be easy to swap the ORM out for a replacement, but there has also been criticism of the repository pattern; check this out.
Here is what I do:
Data:
Has one class inheriting from DbContext.
It has all the DbSets.
Overrides OnModelCreating.
Maps primary keys and relationships (see the sketch at the end of this answer).
Entities:
Has all the POCO classes.
Each property is decorated with the needed data annotations.
Services:
Each service has common methods (GetList(), Find(), Create(), etc.).
Business:
Called from clients; orchestrates the services to perform a specific task, e.g. UserChangePassword (this checks whether the task can be performed, then performs it, or returns error/unauthorized statuses, among many others, so the client can show the correct information about the task). This, in my case, is where I log.
Clients (Desktop/Web/Wpf/etc).
I'm not saying this is the best approach, I'm just sharing what's been working for me.
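As a minimal sketch of the Data layer described above, using the EF6 DbModelBuilder API (the context name and the Product/Order entities with their Id and Products members are illustrative):

using System.Data.Entity;

// Data project: the one class inheriting from DbContext.
public class ShopContext : DbContext
{
    public DbSet<Product> Products { get; set; }
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Primary keys and relationships are configured here,
        // complementing the data annotations on the entity classes.
        modelBuilder.Entity<Product>().HasKey(p => p.Id);
        modelBuilder.Entity<Order>()
            .HasMany(o => o.Products)
            .WithMany();
        base.OnModelCreating(modelBuilder);
    }
}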
I am new to Entity Framework 4.0, using it with C#, and currently experimenting with its features.
What I noticed is that, like most similar ORMs, it relies on a Context object to deal with the data manipulation and the generation of CRUD statements for the generated entities.
This means that if I want to save the changes back to the database, I always need access to a reference to the ObjectContext that instantiated the entities.
That is fine if the context has been created in an accessible scope (the same method, for example), but what if I pass an entity or an entity set to a method and want that method to save the changes? It looks like the only easy way is to pass the ObjectContext along with the parameters.
Another solution would be placing the ObjectContext in some sort of global variable.
Needless to say, I find both of these approaches to have styling and maintainability issues.
In short, the best way I can imagine is getting a reference to the ObjectContext from the entity or entity set.
I know that it isn't possible by default.
I have found an approach that adds an extension method to retrieve the ObjectContext from an entity. However, it only works for entities with relationships, and calling the method is expensive, according to the author.
I was thinking about modifying the T4 template to add a Context property to all my entities and fill it automatically when the entities are instantiated.
I have already modified T4 template once to have Entity Framework enforce Max Length on my generated classes (by following Julie Lerman's Programming Entity Framework 4 book).
I can't say I really enjoy the T4 syntax so far, but if that's the best/only way, so be it...
Has anyone already done this, and what would be the best way to handle it? Would you be willing to share your T4 template, or explain which partial methods or events are best to hook into to get this done?
Is there any major downside in using such an approach?
I am thinking that having so many references to the ObjectContext may hinder or delay its collection by the GC if some of my entities remain in scope even though I no longer have any use for the ObjectContext.
Many thanks.
If you need to pass the object context as a parameter together with your entities, you are doing something wrong.
Usually the context is needed only in a well-defined layer. All classes from this layer that require the context for their logic can receive it through a specialized class, a context provider (it could also be called a service locator). The context provider holds the current context instance in some storage: you can create your own, or store it per thread, per HTTP request, etc.
If you need more than one context instance in your classes, you can modify your provider to also work as a factory.
Another common approach is based on dependency injection. You pass the context to your classes through the constructor (or a property), and you have some bootstrapper code that does all the necessary initialization for you (creates the required instances and passes all dependencies into them). Again, you can pass either a context or a factory. This is usually used together with IoC containers, which do the plumbing for you.
Once you have this infrastructure prepared, you can pass your entity to any initialized class from that layer, and it will have the context available.
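A bare-bones sketch of both options; the type names (MyEntitiesContext, TaskService) are invented for illustration:

using System;

// MyEntitiesContext is assumed to be the generated context deriving
// from System.Data.Objects.ObjectContext (EF4).

// Option 1: a context provider / service locator storing the
// current instance per thread.
public static class ContextProvider
{
    [ThreadStatic]
    private static MyEntitiesContext _current;

    public static MyEntitiesContext Current
    {
        get
        {
            if (_current == null)
                _current = new MyEntitiesContext();
            return _current;
        }
    }
}

// Option 2: dependency injection - the class that needs the context
// receives it through its constructor; a bootstrapper or IoC
// container wires the instances together.
public class TaskService
{
    private readonly MyEntitiesContext _context;

    public TaskService(MyEntitiesContext context)
    {
        _context = context;
    }

    public void Save()
    {
        // Entities can be handed to this class from anywhere;
        // it already holds the context needed to persist changes.
        _context.SaveChanges();
    }
}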