.NET Domain Model, when to eager load - c#

I'm fairly new to the whole .NET scene and I'm still trying to figure this thing out.
One thing that seems to be very advocated for is the Domain Driven Design pattern. And eager as I am to get a flying start in the .NET world and doing it proper I dived in trying to apply it to my project.
As I understand it, it is bad practice to give a domain object access to persistence-layer functions like repositories, but then I'm really struggling with how to get around the problem of eager loading when working with heavily connected graphs. It usually ends up with pretty much no logic in the domain model; all of it moves to a service layer which does calculations and fills the model objects with data, or returns the result directly. For example, price = productService.CalculatePriceFor(product, user); instead of price = product.Price(user), since the latter cannot be accomplished without eager loading the whole product-group tree and discount matrix when first requesting the object.
What is good practice here? Implement subclasses of product where the information for getting the user price is calculated at load time and have another subclass of it when I don't need the user price?

In proper DDD you don't have any heavily connected graphs. Vaughn Vernon's article about aggregate design should help you catch the idea.
http://dddcommunity.org/library/vernon_2011/
Also consider whether Product and User belong to the same bounded context. I would say they don't. In different contexts there might be similar ideas implemented in different ways. For example, in the Pricing Context, the Client class might be a simple aggregate providing some rebates but no information about shipping, invoices, etc. However, in the Shipping Context there would be a similar class that is only a value object (with an address) within a Shipping aggregate. The User class seems to be part of an Authentication Context.
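A minimal sketch of that idea (the class and property names here are illustrative assumptions, not from the question):

```csharp
using System;

// Pricing Context: the client is a small aggregate that only knows about rebates.
namespace Pricing
{
    public class Client
    {
        public Guid Id { get; private set; }
        public decimal RebatePercentage { get; private set; }

        // The only behavior this context cares about.
        public decimal ApplyRebate(decimal price) =>
            price * (1 - RebatePercentage / 100m);
    }
}

// Shipping Context: the same real-world person appears only as a value object.
namespace Shipping
{
    public class Recipient
    {
        public string Name { get; }
        public string Address { get; }

        public Recipient(string name, string address)
        {
            Name = name;
            Address = address;
        }
    }
}
```

Neither class knows about the other, so each context stays small enough that eager loading its aggregates is cheap.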


Where should I put query logic related to filtering data by role user in DDD architecture

I am following DDD architecture for my app. I have an app layer, a domain layer and a data access layer (repository).
Let's say I have 3 roles in my app: admin, supervisor, agency. Every role should only access the data assigned to it.
So the problem is, should I put the query logic that filters data by role in the Repository, like:
var query = dataContext.Order.Where(...);
if (userRole == "admin")
    query = ...; // filter by admin
if (userRole == "supervisor")
    query = ...; // filter by supervisor
return query.ToList();
I think this logic is business logic and should go in the domain layer, but I'm not clear on this.
Can you guys clear this up for me?
The best explanation I've read so far is the one from Patterns, Principles and Practices of Domain-Driven Design, published by Wrox. The image below illustrates the core idea.
All dependencies point inward, so the domain model depends on nothing else, and knows about nothing else. Therein it's pure, and can focus on what matters, the language of the domain.
The application layer (containing the application services) exposes an API of use cases and orchestrates the requests with the involved domain services. Therefore, the code in application services is procedural, while the code in the domain model is usually much richer. That is, if the domain is complex enough to warrant it.
But I digress; to answer your question: the application layer exposes interfaces for the infrastructure to implement (e.g. the repository pattern). It's also the application layer that knows how to query the data (through this interface) and filter it based on the role.
The domain model should only receive the filtered collection and focus on one thing: processing the data.
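A sketch of what that application service could look like (all type and method names here are assumptions for illustration): it knows the current user's role, asks the repository interface for an already-filtered collection, and the domain model never sees unfiltered data.

```csharp
using System.Collections.Generic;

public class OrderApplicationService
{
    private readonly IOrderRepository _orders; // interface implemented in infrastructure

    public OrderApplicationService(IOrderRepository orders) => _orders = orders;

    // The role-based filtering decision lives here, not in the domain model.
    public IReadOnlyList<Order> GetOrdersFor(User user)
    {
        switch (user.Role)
        {
            case "admin":      return _orders.FindAll();
            case "supervisor": return _orders.FindBySupervisor(user.Id);
            default:           return _orders.FindByAgency(user.AgencyId);
        }
    }
}
```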
For completeness, DDD allows many architectures, as long as the domain has no dependencies. Though I find it easiest to grasp, thinking about it this way.
Repositories represent collections of aggregate roots. So when you want to retrieve an aggregate, or a list of aggregates, to perform business operations on, the repository is the place to go.
In your case I imagine you have some kind of User aggregate and I can think of methods like the following on your repository:
findById(UserId id)
findByRole(UserRole role)
findById() will return only one aggregate, while findByRole() returns a list of aggregates.
Always keep in mind to only return full aggregate objects from the corresponding repository and define the repository interface in the domain layer while putting the repository implementation into the infrastructure layer.
There might be reasons not to return a full aggregate from a repository, such as creating summaries or calculating just a count, e.g. the number of users matching specific criteria. But keep in mind to return only immutable objects in that case, which are meant to be read-only. In most cases, though, such information should be relevant only for performing business operations.
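Putting the points above together, the repository contract could be sketched like this (the interface name and aggregate types follow the answer's examples and are otherwise assumptions):

```csharp
using System.Collections.Generic;

// Defined in the domain layer; the implementation lives in the
// infrastructure layer and returns only full aggregate objects.
public interface IUserRepository
{
    User FindById(UserId id);                       // exactly one aggregate
    IReadOnlyList<User> FindByRole(UserRole role);  // a list of aggregates
}
```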

DDD aggregate roots

I have a question regarding DDD and bounded contexts.
Suppose there are two bounded contexts. In the first one the aggregate root is Customer, who is able to publish an advertisement on a webpage. I suppose that falls within his behavior, so he has a PublishAdvertisement() method.
But the second bounded context has Advertisement as its aggregate. That implies that Advertisement has a Customer property, since an advertisement naturally belongs to a Customer.
Both Customer and Advertisement are unique in the system and database.
My question is:
Is it advisable to delegate the creation of Advertisement from Customer to a factory or dependency injection?
Edit:
I thank you for your answers and apologize for the lack of info.
Dependency injection:
I was wondering what the best manner to resolve a given situation is. The company has a Stock of advert templates: if a template is in stock it's good for use; if it's not, it's rented to someone. The company plans on having more Stocks. If a Customer wants to make an advert from these templates, he chooses a template, and if it's in stock all is good to go. Reading this as it is, I assumed there should be a domain service CheckAvailability(template); due to the nature of the service it does not fit in a specific aggregate, because it uses several aggregates with validations and queries the database. In future, when there will be more Stocks (some rented from other companies, maybe in someone else's database), I was planning on using dependency injection to add these Stocks to the service without changing the implementation. Does this seem like a good idea?
Bounded contexts:
In regards to bounded contexts and the database: yes, there is one database object, and two contexts that use the same database object. Order has a reference to Customer, due to belonging to a Customer; it looks something like this:
class Order
{
    Customer Customer { get; private set; }
    // other properties and methods
}
I would appreciate any additional information via link, video, book in terms of implications of having 2 contexts like these (Customer->Order___1:M) relate to the same database. Thank you.
Both Customer and Advertisement are unique in the system and database.
If that is the case, then having these concepts in two bounded contexts that use the same DB objects is a problem! The separation between two bounded contexts is a strong one, so they shouldn't communicate by changing the same DB object.
So I think you have some major design issues there. Try to fix them first by creating a model that corresponds to the real-world problem, discuss it with your domain experts.
And now to answer your main question:
Creating entities through factories is a good idea. The factory hides the (potentially complex) mechanism to create an entity and provide it with the required services. The factory receives these services through DI in the first place, and can forward them to the entity during instantiation.
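A rough sketch of that arrangement (the service and type names are assumptions based on the Stock scenario in the question's edit): the factory gets its services via constructor injection and uses them while instantiating the entity.

```csharp
using System;

public class AdvertisementFactory
{
    private readonly ITemplateStock _stock; // injected; could later be a composite over several Stocks

    public AdvertisementFactory(ITemplateStock stock) => _stock = stock;

    public Advertisement Create(CustomerId owner, TemplateId template)
    {
        // The factory hides the creation mechanism, including availability checks.
        if (!_stock.IsAvailable(template))
            throw new InvalidOperationException("Template is not in stock.");

        return new Advertisement(AdvertisementId.New(), owner, template);
    }
}
```

Adding more Stocks later (including rented, external ones) then only means injecting a different ITemplateStock implementation, without changing the factory.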
Absolutely.
One thing is associating domain objects; another is working with them. An ad has an associated customer, and the customer and ad must be created in their respective domain layers (i.e. repository and service, at least...).
This is separating concerns in the right way, since you don't want customers to be created where ads are also created and vice versa.
I guess you already know the single responsibility principle.
What are the customer related invariants enforced by Customer.PublishAdvertisement() ?
If there aren't any, you'll be better off moving that method to the Advertisement aggregate root in the other BC, perhaps making it a constructor, or to an AdvertisementFactory if the construction logic is complex. Just because the physical-world user who creates an ad is a Customer doesn't automatically imply that their aggregate root should have that method. The ad creation process can stay in the Advertisement BC, with an Advertisement application service as the entry point.
If there are, then Customer could emit an AdvertisementPublished event that the Advertisement BC subscribes to. You should be aware though that if you follow the "aggregate as consistency boundary" good practice, Customer can't be immediately consistent with Advertisement, which means there can be a delay, and inconsistencies can be introduced between when the event is emitted and when the Advertisement is persisted and thus visible to other clients.
It is usually not an issue when you are creating a new AR, but keep in mind that the state of the Customer that checked the invariants and decided to create the Advertisement can change and the invariants be violated in the mean time, before Advertisement is persisted.
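The event-based option might look roughly like this (a sketch; the type names and the simple pending-events list are assumptions, not from the question):

```csharp
using System.Collections.Generic;

public class Customer
{
    public CustomerId Id { get; private set; }

    // Events recorded here are dispatched to subscribers after the operation.
    public IList<object> PendingEvents { get; } = new List<object>();

    public void PublishAdvertisement(TemplateId template)
    {
        // ...check customer-related invariants here...
        PendingEvents.Add(new AdvertisementPublished(Id, template));
    }
}

// Handled in the Advertisement BC, which creates and persists the new aggregate.
public class AdvertisementPublished
{
    public CustomerId CustomerId { get; }
    public TemplateId Template { get; }

    public AdvertisementPublished(CustomerId customerId, TemplateId template)
    {
        CustomerId = customerId;
        Template = template;
    }
}
```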
Obviously, given that the 2 BCs share a common database (which is probably not a good idea, as @theDmi pointed out), you could decide to break that rule and make your transaction span the 2 aggregates. Not necessarily that bad if you just persist a new Advertisement and don't modify one that can potentially be accessed concurrently.
As far as dependency injection, I can't see the connection here -- what is the dependency to be injected ?

Best practice with AutoMaping ViewModel (DTO) to Model (Entity)

I've happily been using AutoMapper in a few projects and made use of .ReverseMap() when going from ViewModel to Model. I'd typically do the following:
// run at startup
// I'd customize the mapping if needed
Mapper.CreateMap<Model, ViewModel>().ReverseMap();
[HttpPost]
public ActionResult Create(ViewModel viewModel)
{
var data = Mapper.Map<Model>(viewModel);
_repo.Insert(data);
_uow.Save();
return View();
}
Then I find this article: http://lostechies.com/jimmybogard/2009/09/18/the-case-for-two-way-mapping-in-automapper/
I'm at a loss.
Is the article simply outdated or is there a better way?
A disclaimer first: There are all kinds of different domains and architectures and this answer may not apply to your design goals or architecture at all. You're of course free to use AutoMapper however you want. Also, I did not write the article, so I'm using my own experience with certain types of projects to come up with an answer.
The importance of the domain layer
First of all, the article in question assumes you are using some version of domain driven design. At the very least, I'd say it appeals to the idea of a domain that's a very important part of the project and should be "protected." The bit that best sums this idea up is:
...Because then our mapping layer would influence our domain model.
The author did not want artifacts outside of the domain layer updating the domain.
Why? Because:
The domain model is the heart of the project.
Modification to the domain layer should be a serious operation--the most important parts handled by the domain layer itself.
The article mentions a few problems with allowing the mapping piece of the solution to do domain model updates, including:
Forced mutable, public collections, like public EntitySet<Category> Categories { get; } <- NO.
You might wonder why having a mutable, public collection is a bad thing--from a domain model perspective you probably don't want some external service blasting in (possibly invalid) categories whenever it wants.
A more sensible API for adding Categories in this case would be:
Have AddCategory and RemoveCategory methods that the entity itself validates before adding.
Expose an IEnumerable<Category> { get; } that can never be modified by outside consumers.
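Put together, that API could look like this (a sketch; the validation shown is just one plausible rule):

```csharp
using System;
using System.Collections.Generic;

public class Product
{
    private readonly List<Category> _categories = new List<Category>();

    // Read-only view: outside code can enumerate but never mutate the collection.
    public IEnumerable<Category> Categories => _categories;

    public void AddCategory(Category category)
    {
        // The entity guards its own invariants before accepting the change.
        if (category == null)
            throw new ArgumentNullException(nameof(category));
        if (_categories.Contains(category))
            return; // ignore duplicates

        _categories.Add(category);
    }

    public void RemoveCategory(Category category) => _categories.Remove(category);
}
```

A mapper that can only reach Categories through AddCategory has no "back door" into the domain layer.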
A project I worked on recently had a quite complex domain. Domain entities frequently were only updated after prospective update operations were run through several validation services (living in the domain).
We would have introduced a huge design problem had we allowed mapping back to domain entities.
Allowing AutoMapper (or another mapping project) to map directly to our domain entities would be subverting the logic in the domain entities (or more likely domain services that performed validation) and creating a kind of "back door" into the domain layer.
Alternative
Hopefully the commentary above provided a little help. Unfortunately the alternative when you're not doing automatic mapping is to use plain ol' =. At the very least, though, if you're in a DDD environment you'll be forced to think a little more about what should happen before a domain entity is created or updated.
However
... The .ReverseMap() method exists. I think this article still rings true for a certain type of project. The built-in ability to automatically create a two-way mapping simply means the library is able to serve applications beyond the one the article targets.
As stated in the disclaimer, two-way mappings might make total sense for your application.

Pattern for retrieving complex object graphs with Repository Pattern with Entity Framework

We have an ASP.NET MVC site that uses Entity Framework abstractions with Repository and UnitOfWork patterns. What I'm wondering is how others have implemented navigation of complex object graphs with these patterns. Let me give an example from one of our controllers:
var model = new EligibilityViewModel
{
Country = person.Pathway.Country.Name,
Pathway = person.Pathway.Name,
Answers = person.Answers.ToList(),
ScoreResult = new ScoreResult(person.Score.Value),
DpaText = person.Pathway.Country.Legal.DPA.Description,
DpaQuestions = person.Pathway.Country.Legal.DPA.Questions,
Terms = person.Pathway.Country.Legal.Terms,
HowHearAboutUsOptions = person.Pathway.Referrers
};
It's a registration process and pretty much everything hangs off the POCO class Person. In this case we're caching the person through the registration process. I've now started implementing the latter part of the registration process which requires access to data deeper in the object graph. Specifically DPA data which hangs off Legal inside Country.
The code above is just mapping out the model information into a simpler format for the ViewModel. My question is do you consider this fairly deep navigation of the graph good practice or would you abstract out the retrieval of the objects further down the graph into repositories?
In my opinion, the important question here is - have you disabled LazyLoading?
If you haven't done anything, then it's on by default.
So when you do Person.Pathway.Country, you will invoke another call to the database server (unless you're doing eager loading, which I'll speak about in a moment). Given you're using the Repository pattern, this is a big no-no. Controllers should not cause direct calls to the database server.
Once a Controller has received the information from the Model, it should be ready to do projection (if necessary) and pass it on to the View, not go back to the Model.
This is why in our implementation (we also use repository, ef4, and unit of work), we disable Lazy Loading, and allow the pass through of the navigational properties via our service layer (a series of "Include" statements, made sweeter by enumerations and extension methods).
We then eager-load these properties as the Controllers require them. But the important thing is, the Controller must explicitly request them.
Which basically tells the UI - "Hey, you're only getting the core information about this entity. If you want anything else, ask for it".
We also have a Service Layer mediating between the controllers and the repository (our repositories return IQueryable<T>). This allows the repository to get out of the business of handling complex associations. The eager loading is done at the service layer (as well as things like paging).
The benefit of the service layer is simple: looser coupling. The Repository handles only Add, Remove and Find (which returns IQueryable), the Unit of Work handles "new-ing up" DCs and committing changes, and the service layer handles materialization of entities into concrete collections.
It's a nice, 1-1 stack-like approach:
personService.FindSingle(1, "Addresses") // Controller calls service
|
--- Person FindSingle(int id, string[] includes) // Service Interface
|
--- return personRepository.Find().WithIncludes(includes).WithId(id); // Service calls Repository, adds on "filter" extension methods
|
--- IQueryable<T> Find() // Repository
|
-- return db.Persons; // returns IQueryable of Persons (deferred exec)
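One possible shape for the WithIncludes extension sketched in the stack above (an assumption about the implementation; in EF4 the string-based Include lives on ObjectQuery<T>, so the cast assumes the query is EF-backed):

```csharp
using System.Data.Objects; // EF4's ObjectQuery<T>
using System.Linq;

public static class QueryExtensions
{
    // Applies a series of eager-loading instructions to the query, so the
    // controller decides which navigation properties get loaded.
    public static IQueryable<T> WithIncludes<T>(
        this IQueryable<T> query, params string[] includes) where T : class
    {
        var objectQuery = (ObjectQuery<T>)query;
        foreach (var include in includes)
            objectQuery = objectQuery.Include(include);
        return objectQuery;
    }
}
```

Usage, following the stack above: personRepository.Find().WithIncludes("Addresses") eager-loads Addresses; calling Find() alone loads only the core entity.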
We haven't got up to the MVC layer yet (we're doing TDD), but a service layer could be another place you could hydrate the core entities into ViewModels. And again - it would be up to the controller to decide how much information it wishes.
Again, it's all about loose coupling. Your controllers should be as simplistic as possible, and not have to worry about complex associations.
In terms of how many repositories, this is a highly debated topic. Some like one per entity (overkill if you ask me), some like to group based on functionality (makes sense in terms of functionality, easier to work with); however, we have one per aggregate root.
I can only guess from your Model, but "Person" seems to be the only aggregate root I can see.
Therefore, it doesn't make much sense having another repository to handle "Pathways", when a pathway is always associated with a particular "Person". The Person repository should handle this.
Again - maybe if you screencapped your EDMX, we could give you more tips.
This answer might extend a little beyond the scope of the question, but I thought I'd give an in-depth answer, as we are dealing with this exact scenario right now.
HTH.
It depends on how much of the information you're using at any one time.
For example, if you just want to get the country name for a person (person.Pathway.Country.Name) what is the point in hydrating all of the other objects from the database?
When I only need a small part of the data I tend to just pull out what I'm going to use. In other words I will project into an anonymous type (or a specially made concrete type if I must have one).
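For example, projecting into an anonymous type might look like this (the property paths follow the question's model; EF translates the projection into a SQL statement that selects only those columns):

```csharp
using System.Linq;

// Pull only what the view needs instead of hydrating the whole graph.
var summary = context.Persons
    .Where(p => p.Id == personId)
    .Select(p => new
    {
        Country = p.Pathway.Country.Name,
        Pathway = p.Pathway.Name
    })
    .Single(); // one round trip, two columns
```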
It's not a good idea to pull out an entire object and everything related to it every time you want to access a few properties. What if you're doing this on every postback, or even multiple times? By doing this you might be making life easier in the short term at the cost of making your application less scalable in the long term.
As I stated at the start though, there isn't a one size fits all rule for this, but I'd say it's rare that you need to hydrate that much information.

N-layered database application without using an ORM, how does the UI specify what it needs of data to display?

I'm looking for pointers and information here, I'll make this CW since I suspect it has no single one correct answer. This is for C#, hence I'll make some references to Linq below. I also apologize for the long post. Let me summarize the question here, and then the full question follows.
Summary: in a UI/BLL/DAL/DB 4-layered application, how can changes to the user interface (say, showing more columns in a grid) avoid leaking through the business logic layer and into the data access layer in order to get hold of the data to display (assuming it's already in the database)?
Let's assume a layered application with 3(4) layers:
User Interface (UI)
Business Logic Layer (BLL)
Data Access Layer (DAL)
Database (DB; the 4th layer)
In this case, the DAL is responsible for constructing SQL statements and executing them against the database, returning data.
Is the only way to "correctly" construct such a layer to just always do "select *"? To me that's a big no-no, but let me explain why I'm wondering.
Let's say that I want, for my UI, to display all employees that have an active employment record. By "active" I mean that the employment records from-to dates contains today (or perhaps even a date I can set in the user interface).
In this case, let's say I want to send out an email to all of those people, so I have some code in the BLL that ensures I haven't already sent out email to the same people already, etc.
For the BLL, it needs minimal amounts of data. Perhaps it calls up the data access layer to get that list of active employees, and then a call to get a list of the emails it has sent out. Then it joins on those and constructs a new list. Perhaps this could be done with the help of the data access layer, this is not important.
What's important is that for the business layer, there's really not much data it needs. Perhaps it just needs the unique identifier for each employee, for both lists, to match upon, and then say "These are the unique identifiers of those that are active, that you haven't already sent out an email to". Do I then construct DAL code that constructs SQL statements that only retrieve what the business layer needs? Ie. just "SELECT id FROM employees WHERE ..."?
What do I do then for the user interface? For the user, it would perhaps be best to include a lot more information, depending on why I want to send out emails. For instance, I might want to include some rudimentary contact information, or the department they work for, or their managers name, etc., not to say that I at least name and email address information to show.
How does the UI get that data? Do I change the DAL to make sure I return enough data back to the UI? Do I change the BLL to make sure that it returns enough data for the UI? If the object or data structures returned from the DAL back to the BLL can be sent to the UI as well, perhaps the BLL doesn't need much of a change, but then requirements of the UI impacts a layer beyond what it should communicate with. And if the two worlds operate on different data structures, changes would probably have to be done to both.
And what then when the UI is changed, to help the user even further, by adding more columns, how deep would/should I have to go in order to change the UI? (assuming the data is present in the database already so no change is needed there.)
One suggestion that has come up is to use Linq-To-SQL and IQueryable, so that if the DAL, which deals with what (as in what types of data) and why (as in WHERE-clauses) returned IQueryables, the BLL could potentially return those up to the UI, which could then construct a Linq-query that would retrieve the data it needs. The user interface code could then pull in the columns it needs. This would work since with IQuerables, the UI would end up actually executing the query, and it could then use "select new { X, Y, Z }" to specify what it needs, and even join in other tables, if necessary.
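Concretely, the suggestion amounts to something like this (a sketch; the service and entity names are illustrative):

```csharp
using System.Linq;

// The UI composes the final projection over an IQueryable handed up by the
// BLL; the SQL executes only at ToList(), i.e. in the UI layer.
var rows = employeeService.ActiveEmployees()   // IQueryable<Employee> from the BLL
    .Select(e => new
    {
        e.Id,
        e.Name,
        e.Email,
        Department = e.Department.Name,        // extra columns the UI wants
        Manager = e.Manager.Name
    })
    .ToList();
```

This is exactly the pattern criticized below: the connection must still be open when ToList() runs, which is what makes disposal timing a problem.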
This looks messy to me. That the UI executes the SQL code itself, even though it has been hidden behind a Linq frontend.
But, for this to happen, the BLL or the DAL should not be allowed to close the database connections, and in an IoC type of world, the DAL-service might get disposed of a bit sooner than the UI code would like, so that Linq query might just end up with the exception "Cannot access a disposed object".
So I'm looking for pointers. How far off are we? How are you dealing with this? I consider the fact that changes to the UI will leak through the BLL and into the DAL a very bad solution, but right now it doesn't look like we can do better.
Please tell me how stupid we are and prove me wrong?
And note that this is a legacy system. Changing the database schema isn't in the scope for years yet, so a solution to use ORM objects that would essentially do the equivalent of "select *" isn't really an option. We have some large tables that we'd like to avoid pulling up through the entire list of layers.
This is not at all an easy problem to solve. I have seen many attempts (including the IQueryable approach you describe), but none that are perfect. Unfortunately we are still waiting for the perfect solution. Until then, we will have to make do with imperfection.
I completely agree that DAL concerns should not be allowed to leak through to upper layers, so an insulating BLL is necessary.
Even if you don't have the luxury of redefining the data access technology in your current project, it still helps to think about the Domain Model in terms of Persistence Ignorance. A corollary of Persistence Ignorance is that each Domain Object is a self-contained unit that has no notion of stuff like database columns. It is best to enforce data integrity as invariants in such objects, but this also means that an instantiated Domain Object will have all its constituent data loaded. It's an either-or proposition, so the key becomes finding a good Domain Model that ensures each Domain Object holds (and must be loaded with) an 'appropriate' amount of data.
Too granular objects may lead to chatty DAL interfaces, but too coarse-grained objects may lead to too much irrelevant data being loaded.
A very important exercise is to analyze and correctly model the Domain Model's Aggregates so that they strike the right balance. The book Domain-Driven Design contains some very illuminating analyses of modeling Aggregates.
Another strategy which can be helpful in this regard is to apply the Hollywood Principle as much as possible. The main problem you describe concerns Queries, but if you can shift your focus to be more Command-oriented, you may be able to define some more coarse-grained interfaces that don't require you to always load too much data.
I'm not aware of any simple solution to this challenge. There are techniques like the ones I described above that can help you address some of the issues, but in the end it is still an art that takes experience, skill and discipline.
Use the concept of a view model (or data transfer objects) built for UI consumption cases. It will be the job of the BLL to take these objects and, if the data is incomplete, request additional data (which we call the model). Then the BLL can make correct decisions about what view models to return. Don't let your model (data) specifics permeate to the UI.
UI <-- (viewmodel) --> BLL <-- (model) --> Persistence/Data layers
This decoupling lets you scale your application better. Persistence independence, I think, just naturally falls out of this approach, as construction and specification of the view models can be done flexibly in the BLL by using Linq-to-SQL or another ORM technology.
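A sketch of that boundary (all class and member names here are assumptions): the BLL maps model data coming from the persistence layer into a view model shaped for one UI use case, so UI changes stop at the BLL.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Shaped for the "email active employees" screen; add a column here, not in the DAL.
public class ActiveEmployeeViewModel
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}

public class EmployeeService // BLL
{
    private readonly IEmployeeDal _dal;

    public EmployeeService(IEmployeeDal dal) => _dal = dal;

    public IList<ActiveEmployeeViewModel> GetActiveEmployees(DateTime onDate)
    {
        return _dal.FindActiveEmployees(onDate)        // model objects from the DAL
            .Select(e => new ActiveEmployeeViewModel
            {
                Name = e.Name,
                Email = e.Email,
                Department = e.DepartmentName
            })
            .ToList();
    }
}
```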
