I have the following service operation in my ICustomerService:
public void RegisterCustomer(Customer customer)
{
    Check.NotNull(customer, "customer");
    // do other domain-specific things...
    customerRepository.Save(customer);
}
Edit
The Customer class has a reference to an ICollection<> of CustomerAddress entities.
This operation has to save the customer address list too.
I know that doing cascade updates is not a good thing in this scenario:
How should I handle persistence for referenced entities?
From the DDD perspective, what should I do in this case?
Should I pass the customer address list to the service operation as a parameter?
I know that doing cascade updates is not a good thing in this scenario:
Why? As long as CustomerAddress is a simple Entity and not an Aggregate Root, you have everything to gain by letting EF persist them along with the Customer.
Judging by your other question too, I think you may be missing the Aggregate Root vs Entity distinction. This is where you should start -- design your aggregates and decide which objects should be ARs, simple Entities, and Value Objects.
From there, everything should fall into place according to a few simple rules: one Repository per AR, Entities can only hold references to Entities from the same Aggregate, it's better if an AR references another AR by its ID only, and VOs can be referenced from anywhere.
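For illustration only, here is a minimal sketch of that layout, assuming Customer is the Aggregate Root and CustomerAddress a plain Entity inside it (all names beyond those two are made up):

using System.Collections.Generic;

// CustomerAddress is a plain Entity: it is only reachable through its Customer.
public class CustomerAddress
{
    public int Id { get; private set; }
    public string Street { get; private set; }
    public string City { get; private set; }

    public CustomerAddress(string street, string city)
    {
        Street = street;
        City = city;
    }
}

// Customer is the Aggregate Root and owns its addresses.
public class Customer
{
    private readonly List<CustomerAddress> _addresses = new List<CustomerAddress>();

    public int Id { get; private set; }
    public string Name { get; private set; }
    public IReadOnlyCollection<CustomerAddress> Addresses { get { return _addresses; } }

    public void AddAddress(CustomerAddress address)
    {
        // invariants about addresses are enforced here, inside the aggregate
        _addresses.Add(address);
    }
}

// One repository per Aggregate Root: saving the Customer saves the whole
// aggregate, so EF persists the addresses along with it.
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}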
If you expose CustomerPhone, you can break (or not? it depends) an invariant of the Customer object. One approach is to use the Memento pattern: extract the internal state of your Customer into a Memento object and pass that Memento to the repository.
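A rough sketch of that Memento idea, with invented names and an invented invariant, might look like this:

using System;
using System.Collections.Generic;

// The memento is a plain snapshot of the aggregate's state.
public class CustomerMemento
{
    public int Id;
    public string Name;
    public List<string> PhoneNumbers;
}

public class Customer
{
    private int _id;
    private string _name;
    private readonly List<string> _phoneNumbers = new List<string>();

    public Customer(int id, string name)
    {
        _id = id;
        _name = name;
    }

    // Behaviour that protects the invariant stays on the aggregate...
    public void AddPhoneNumber(string number)
    {
        if (_phoneNumbers.Count >= 3)
            throw new InvalidOperationException("A customer can have at most 3 phone numbers.");
        _phoneNumbers.Add(number);
    }

    // ...and state only leaves the aggregate as a memento.
    public CustomerMemento ToMemento()
    {
        return new CustomerMemento
        {
            Id = _id,
            Name = _name,
            PhoneNumbers = new List<string>(_phoneNumbers)
        };
    }
}

public interface ICustomerRepository
{
    void Save(CustomerMemento state);
}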
Related
I am trying to grasp the idea of the repository pattern and to implement it on database structures I've already set up in the past. I'm now looking for the best practice for working with my lookup tables. I've created a test project to play around with, and this is my database model:
You can see that I have three tables for the lookups: Lookup, Language and LookupLanguage. The Language table simply contains the languages.
The Lookup table holds the different types used throughout the models.
And LookupLanguage links both tables together:
I've created a new project with all the models mapped 1-to-1 to the database tables:
I also created a generic repository and a generic CrudService interface:
public interface ICrudService<T> where T : IsActiveEntity, new()
{
    int Create(T item);
    void Save();
    void Delete(int id);
    T Get(int id);
    IEnumerable<T> GetAll();
    IEnumerable<T> Where(Expression<Func<T, bool>> func, bool showDeleted = false);
    void Restore(int id);
}
Now, according to the following post: When implementing the repository pattern should lookup value / tables get their own Repository?, the repository should hide the underlying database layer. So I think I need a new implementation of a service and/or repository to get the lookups, but then where do I specify which language the lookup should be in?
Let's take the status (new, accepted, refused) from the company as an example.
The company model is as follow:
public partial class Company : IsActiveEntity
{
    [Required]
    [MaxLength(50)]
    public string CompanyName { get; set; }

    public System.Guid StatusGuid { get; set; }

    [ForeignKey("StatusGuid")]
    public virtual Lookup Status { get; set; }
}
I guess I don't need to have a separate implementation of a repository?
But I do need a separate CompanyService implementation:
interface ICompanyService : ICrudService<Company>
{
    IQueryable<LookupLanguage> GetStatuses(Guid languageguid);
    LookupLanguage GetStatus(Guid statusguid, Guid languageguid);
}
Is this the correct approach, or do I miss something here?
Creating a generic LookupRepository is the better option in your case, both because of your table schema and from a maintenance perspective.
Given the name ICompanyService, I'm not sure whether you are using both the Service Locator and Repository patterns or just Repository. But regardless, I agree that repositories should not always map 1-1 to tables, even though they do most of the time.
The SO link you provided has a different table structure than yours: you have a generic lookup table, whereas the link has a separate table for each lookup. In the case where you have separate tables, it makes sense to put the lookup method on the entity repository, since you will have separate code to fetch the data for each lookup (as they have separate tables with different schemas).
But in your case you have a single table that stores all the lookup types for each language, so it makes sense to have a single LookupRepository that returns the various types of lookups based on Language and LookupType. If you create each lookup method in a separate entity repository (like GetStatuses in CompanyRepository and GetStatuses in ContactRepository), you will have to repeat the logic in each repository.
Think about what happens if you change the schema of the lookup table (say, add a column) and need to test every place the lookups are used: it will be a nightmare if you have lookup methods all over the place, and pretty easy if you have one method in LookupRepository.
interface ILookupService : ICrudService<Lookup>
{
    IQueryable<Lookup> GetStatuses(Guid languageguid, LookupType lookupType);
    Lookup GetStatus(Guid statusguid, Guid languageguid, LookupType lookupType);
}
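To make that concrete, here is a rough sketch of what the lookup retrieval could look like on top of an EF context; the context name, the LookupLanguages set and the navigation/key property names are all assumptions about your model:

using System;
using System.Linq;

public class LookupService // would also implement the ICrudService<Lookup> members
{
    private readonly LookupContext _context; // hypothetical DbContext with a LookupLanguages DbSet

    public LookupService(LookupContext context)
    {
        _context = context;
    }

    public IQueryable<Lookup> GetStatuses(Guid languageguid, LookupType lookupType)
    {
        // one place that knows how Lookup, Language and LookupLanguage hang together
        return _context.LookupLanguages
            .Where(ll => ll.LanguageGuid == languageguid && ll.Lookup.Type == lookupType)
            .Select(ll => ll.Lookup);
    }

    public Lookup GetStatus(Guid statusguid, Guid languageguid, LookupType lookupType)
    {
        return GetStatuses(languageguid, lookupType)
            .SingleOrDefault(l => l.LookupGuid == statusguid);
    }
}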
As regards your question, "Is this the correct approach" - this entirely depends on your specific needs.
What you have done doesn't seem to have any real issues. You have implemented the repository pattern using generics which is great. You are using interfaces for your repositories which allows for easier unit testing, also great!
One of your tags suggests you are interested in Entity Framework, but you do not seem to be using it. Entity Framework would simplify your code by creating the boilerplate classes for you, and you can still use your repository pattern code with the classes Entity Framework creates.
It seems that you are confusing the idea of a service and a repository. A repository is a general object which allows you to get data from a store without caring about the implementation. In your example, ICompanyService is a repository.
This is a really controversial topic and there are different approaches to the problem. In our data logic we do not use the repository pattern, because we do not want to abstract away most of the benefits of Entity Framework. Instead, we pass the context to the business logic, which is already a combination of the UoW / Repository patterns. Your approach is fine if you apply it consistently across all of your company services. However, from what I have seen so far, putting methods into the related services according to their return values is the best way to remember where they are. For instance, if you want to get the company lookups, create an ILookupService and put a GetLookUpsByCompany(int companyId) method there to retrieve them.
I would argue with the linked response. Repositories ARE linked to database entities; the Entity Framework itself, being a UoW/repository implementation, is the best example. Services, on the other hand, are for domain concerns, and if there is a mismatch between your database entities and domain entities (i.e. you have two separate layers), services can help glue the two together.
In your specific case, you have repositories although you call them services. And you need a repository per database entity; that's just easier to implement and maintain. It also helps to answer your question: yes, you need the extra repository for the linking table.
A small suggestion: you seem to have a generic query function that only accepts where clauses:
IEnumerable<T> Where(Expression<Func<T, bool>> func, bool showDeleted = false);
If you already follow this route of allowing arbitrary filtering expressions (which is itself a little arguable, as someone will point out that you can't possibly guarantee that every technically possible filter can be executed by the database engine), why not allow all possible queries, including ordering, paging, etc.:
IQueryable<T> Query { get; }
This is as easy to implement as your version (you just expose the DbSet) but allows clients to perform more complicated queries, with the same possible concern that such a contract is too broad.
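For what it's worth, a minimal sketch of that in the generic repository could look like this, assuming an EF DbContext underneath (nothing else about your design needs to change):

using System.Data.Entity;
using System.Linq;

public class Repository<T> where T : IsActiveEntity, new()
{
    private readonly DbContext _context;

    public Repository(DbContext context)
    {
        _context = context;
    }

    // Clients can compose Where, OrderBy, Skip/Take, projections, etc. themselves.
    public IQueryable<T> Query
    {
        get { return _context.Set<T>(); }
    }
}

// usage, e.g. paging (assuming IsActiveEntity exposes IsActive and Id):
// var page = repository.Query.Where(x => x.IsActive).OrderBy(x => x.Id).Skip(20).Take(10).ToList();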
Localization is a presentation layer thing. The lower layers of your application should bother with it as little as possible.
I see two different kinds of lookups: translations of coded concepts (Mr/Miss/Mrs) and translations of entity properties (company names maybe, or job titles or product names).
Coded concepts
I would not use lookup tables for coded concepts. There is no need to bother the lower layers at all with this. You will only need to translate them once for the entire application and create simple resource files that contain the translations.
But if you do wish to keep the translations in the database, a separate lookup repository for the codes or even per code system will sort of replace the resource file and serve you fine.
Entity properties
I can imagine different, nastier localization issues when certain entities have one or more properties that are translated into different languages. Then the translation becomes part of the entity. I'd want the repository to cough up entity objects that contain all translations of the description, in a dictionary or so, because the business layer should not worry about language when querying, caching and updating relations. It should not ask the company repository for the Dutch version of company X. It should simply ask for company X and be served a Company object that contains its name in Dutch, English and French.
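A small sketch of what I mean, with invented property names:

using System;
using System.Collections.Generic;

public class Company
{
    public Guid Id { get; set; }

    // key = language code ("nl", "en", "fr"), value = the translated name
    public IDictionary<string, string> LocalizedNames { get; set; }
}

// The repository returns the Company with all translations attached;
// only the presentation layer picks one:
//   string name = company.LocalizedNames["nl"];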
I have one more remark about the actual database implementation:
I think the lookup tables are distracting from the actual entities, to the point where you have forgotten to create a relation between person and company. ;) I'd suggest putting all translations of entity properties in a single XML-typed column instead.
This illustrates why the repository should handle entities plus translations. If you were to make this storage layer level implementation change at some point, i.e. go from lookup tables to xml columns, the repository interfaces should remain the same.
I have been learning DDD for a few days now, and I am starting to like it.
I (think I) understand the principles of DDD, where your main focus is on business objects: you have aggregates, aggregate roots, repositories only for aggregate roots, and so on.
I am trying to create a simple project where I combine DDD with the Code First approach.
My questions are (I am using ASP.NET MVC):
Will DDD business objects be different from Code First objects?
Even if they will probably be the same; for example, I can have a Product business object which has all the rules and methods, and a Product Code First (POCO) object which just contains the properties I need to save in the database.
If the answer to question 1 is "yes", how do I notify the Product POCO object that a property of the Product business object has changed and needs to be updated? Do I use AutoMapper or something like it?
If the answer is "no", I am completely lost.
Can you show me the simplest (CRUD) example of how to put those two together?
Thank you
Update: I no longer advocate the use of "domain objects" and instead advocate a messaging-based domain model. See here for an example.
The answer to #1 is it depends. In any enterprise application, you're going to find 2 major categories of stuff in the domain:
Straight CRUD
There's no need for a domain object here because the next state of the object doesn't depend on the previous state of the object. It's all data and no behavior. In this case, it's ok to use the same class (i.e. an EF POCO) everywhere: editing, persisting, displaying.
An example of this is saving a billing address on an order:
public class BillingAddress {
    public Guid OrderId;
    public string StreetLine1;
    // etc.
}
On the other hand, we have...
State Machines
You need to have separate objects for domain behavior and state persistence (and a repository to do the work). The public interface on the domain object should almost always be all void methods and no public getters. An example of this would be order status:
public class Order { // this is the domain object
    private Guid _id;
    private Status _status;

    // note the behavior here - we throw an exception if it's not a valid state transition
    public void Cancel() {
        if (_status == Status.Shipped)
            throw new InvalidOperationException("Can't cancel order after shipping.");
        _status = Status.Cancelled;
    }

    // etc...
}

namespace Data {
    public class Order { // this is the persistence (EF) class
        public Guid Id;
        public Status Status;
    }
}

public interface IOrderRepository {
    // The implementation of this will:
    // 1. Load the EF class if it exists or new it up with the ID if it doesn't
    // 2. Map the domain class to the EF class
    // 3. Save the EF class to the DbContext.
    void Save(Order order);
}
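A rough sketch of how that Save could look, assuming the domain Order exposes its state to the persistence layer in some way (internal getters here) and that the context has a DbSet<Data.Order> called Orders:

using System.Data.Entity;

public class OrderRepository : IOrderRepository
{
    private readonly OrdersContext _db; // hypothetical DbContext with DbSet<Data.Order> Orders

    public OrderRepository(OrdersContext db)
    {
        _db = db;
    }

    public void Save(Order order)
    {
        // 1. Load the EF class if it exists, or new it up with the ID if it doesn't
        var row = _db.Orders.Find(order.Id);
        if (row == null)
        {
            row = new Data.Order { Id = order.Id };
            _db.Orders.Add(row);
        }

        // 2. Map the domain class to the EF class
        row.Status = order.Status;

        // 3. Save the EF class to the DbContext
        _db.SaveChanges();
    }
}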
The answer to #2 is that the DbContext will automatically track changes to EF classes.
The answer is no. One of the best things about EF Code First is that it fits nicely with DDD, since you have to create your business objects by hand, so use your EF models as your DDD entities and value objects. There is no need to add an extra layer of complexity; I don't think DDD recommends that anywhere.
You could even have your entities implement an IEntity interface and your value objects implement IValue. Additionally, follow the rest of the DDD patterns, namely Repositories, to do the actual communication with the database. For more of these ideas, have a look at this very good sample application in .NET; even though it doesn't use EF Code First, it's still very valuable: http://code.google.com/p/ndddsample/
Recently I did a similar project. I was following this tutorial: link
I did it this way: I created a blank solution and added three projects: Domain, Service and WebUI.
Simply put, in Domain I placed the model (for example the EF Code First classes, methods, etc.).
Service was used for the domain to communicate with the world (WebUI, MobileUI, other sites, etc.) using ASP.NET Web API.
WebUI was actually the MVC application (but the model was in Domain, so it was mostly VC).
Hope I've helped
The Pluralsight course: Entity Framework in the Enterprise goes into this exact scenario of Domain Driven Design incorporated with EF Code First.
For number 1, I believe you can do it either way. It's just a matter of style.
For number 2, the instructor in the video goes through a couple of ways to account for this. One way is to have a "State" property on every class that is set on the client side when modifying a value. The DbContext then knows which changes to persist.
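One possible shape of that idea (this is my own EF 6-style sketch of the approach, not the course's code) is a small enum plus an extension method that translates the client-set state into EF's EntityState just before saving:

using System.Data.Entity;

public enum ObjectState { Unchanged, Added, Modified, Deleted }

public interface IObjectWithState
{
    ObjectState State { get; set; }
}

public static class DbContextExtensions
{
    // Call this right before SaveChanges() once the modified graph is attached.
    public static void ApplyStateChanges(this DbContext context)
    {
        foreach (var entry in context.ChangeTracker.Entries<IObjectWithState>())
        {
            entry.State = ConvertState(entry.Entity.State);
        }
    }

    private static EntityState ConvertState(ObjectState state)
    {
        switch (state)
        {
            case ObjectState.Added:    return EntityState.Added;
            case ObjectState.Modified: return EntityState.Modified;
            case ObjectState.Deleted:  return EntityState.Deleted;
            default:                   return EntityState.Unchanged;
        }
    }
}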
Late question on this topic.
Reading Josh Kodroff's answer confirms my thoughts about implementing a Repository against, for instance, an Entity Framework DAL.
You map the domain object to an EF persistence object and let EF handle it when saving.
When retrieving, you let EF fetch from the database, map the result to your domain object (aggregate root) and add it to your collection.
Is this the correct strategy for repository implementation?
I have read this once:
"Don't leave entities as bags of getters and setters and put their methods in another layer unless you have a good reason to"
My customer, order, ... objects just get their data from SqlDataReaders. They only have getters and setters.
My first question is: which design approach is being followed when someone implements methods in entities, AND what do these methods do?
This way of thinking comes from the Domain Driven Design community.
In DDD you create a Domain Model that captures the functionality that your users request. You design your entities as having functionality and the data they need for it. You group them together in aggregates and you have separate classes that are responsible for construction (Factories) and querying (Repositories).
If you only have getters/setters you have an 'Anemic Domain Model'. Martin Fowler wrote about it in this article.
The problem with an Anemic Domain model is that you have the overhead of mapping your database to objects but not the benefits of it. If you don't use your entities as a real domain model, why don't you just use a DataTable or something for your data and keep your business logic in separate functions? An Anemic Domain model is an anti-pattern that should be avoided.
You also mention that you map the entities yourself. This blog explains why using an Object-Relational Mapping tool can really help. If you use Entity Framework with a Code First approach you can write a clean domain model with both data and functionality and map it to your database without much hassle. Then you will have the best of both worlds.
When you have methods as part of your model, you should only include model-specific logic. For example, consider a bank account:
public class Account {
    public AccountId Id { get; set; }
    public Person Customer { get; set; }

    public void Credit(Money amount) { ... }
    public void Debit(Money amount) { ... }
}
Credit and Debit are model-specific logic (you won't find them anywhere else in the application), and should be encapsulated in the Account class.
You also mentioned that you used SqlDataReader within your model classes to get the data from the database. This is a big anti-pattern. Here are some problems you will encounter with this:
Violating the Single Responsibility Principle - the model is now in charge of both representing the data and getting it from the db.
How about querying children in your model? It gets messy.
You won't be able to change your data-access as easily.
Keep the model lean. Put the data access logic in a repository, e.g. an AccountRepository.
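For example, a repository that keeps the SqlDataReader code out of the model could look roughly like this; the connection handling, table and column names are all assumptions:

using System;
using System.Data.SqlClient;

public class AccountRepository
{
    private readonly string _connectionString;

    public AccountRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Account GetById(Guid id)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, CustomerName, Balance FROM Accounts WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", id);
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    return null;

                // the mapping lives here, not in the Account class
                var account = new Account();
                // ... read the columns from the reader and populate the account
                return account;
            }
        }
    }
}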
I am using the repository pattern in a .NET C# application that does not use an ORM. The issue I am having is how to fill one-to-many List properties of an entity. E.g. a customer has a list of orders, i.e. the Customer class has a List property called Orders, and my repository has a method called GetCustomerById. In that case:
Should I load the Orders list within the GetCustomerById method?
What if the Order itself has another list property and so on?
What if I want to do lazy loading? Where would I put the code to load the Orders property of the customer? Inside the Orders property get{} accessor? But then I would have to inject the repository into the domain entity, which I don't think is the right solution.
This also raises questions about features like change tracking, deleting, etc. So I think the end question is: can I do DDD without an ORM?
But right now I am only interested in lazy loading the List properties of my domain entities. Any ideas?
Nabeel
I am assuming this is a very common issue for anyone not using an ORM with Domain Driven Design. Any ideas?
can I do DDD without an ORM?
Yes, but an ORM simplifies things.
To be honest, I think your problem isn't whether you need an ORM or not; it's that you are thinking too much about the data rather than behaviour, which is the key to success with DDD. In terms of the data model, most entities will have associations to most other entities in some form, and from this perspective you could traverse all around the model. This is what it looks like with your customer and orders, and perhaps why you think you need lazy loading. But you need to use aggregates to break these relationships up into behavioural groups.
For example, why have you modelled the customer aggregate to have a list of orders? If the answer is "because a customer can have orders", then I'm not sure you're in the mindset of DDD.
What behaviour is there that requires the customer to have a list of orders? When you give more thought to the behaviour of your domain (i.e. what data is required at what point), you can model your aggregates around use cases, and things become much clearer and much easier, as you are only change-tracking a small set of objects within the aggregate boundary.
I suspect that Customer should be a separate aggregate without a list of orders, and Order should be an aggregate with a list of order lines. If you need to perform operations on each order for a customer, use orderRepository.GetOrdersForCustomer(customerID), make your changes, then use orderRepository.Save(order).
Regarding change tracking without an ORM, there are various ways you can do this. For example, the order aggregate could raise events that the order repository listens to for deleted order lines; these could then be deleted when the unit of work completes. Or, a slightly less elegant way is to maintain deleted lists, i.e. order.DeletedOrderLines, which your repository can obviously read.
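The deleted-list variant can be as small as this (sketch only):

using System.Collections.Generic;

public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();
    private readonly List<OrderLine> _deletedLines = new List<OrderLine>();

    public IEnumerable<OrderLine> OrderLines { get { return _lines; } }

    // the repository reads this when saving and issues the corresponding deletes
    public IEnumerable<OrderLine> DeletedOrderLines { get { return _deletedLines; } }

    public void RemoveLine(OrderLine line)
    {
        if (_lines.Remove(line))
            _deletedLines.Add(line);
    }
}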
To Summarise:
I think you need to think more about behaviour than data
ORM's make life easier for change tracking, but you can do it without one and you can definitely do DDD without one.
EDIT in response to comment:
I don't think I'd implement lazy loading for order lines. What operations are you likely to perform on the order without needing the order lines? Not many I suspect.
However, I'm not one to be confined to the 'rules' of DDD when they don't seem to make sense, so... If, in the unlikely scenario, there are a number of operations performed on the order object that don't require the order lines to be populated AND orders often have a large number of order lines associated with them (both would have to be true for me to consider it an issue), then I'd do this:
Have this private field in the order object:
private Func<Guid, IList<OrderLine>> _lazilyGetOrderLines;
Which would be passed by the order repository to the order on creation:
Order order = new Order(this.GetOrderLines);
Where this is a private method on the OrderRepository:
private IList<OrderLine> GetOrderLines(Guid orderId)
{
    // DAL code here
}
Then the order lines property could look like this:
private IList<OrderLine> _orderLines;

public IEnumerable<OrderLine> OrderLines
{
    get
    {
        if (_orderLines == null)
            _orderLines = _lazilyGetOrderLines(this.OrderId);
        return _orderLines;
    }
}
Edit 2
I've found this blog post which has a similar solution to mine but slightly more elegant:
http://thinkbeforecoding.com/post/2009/02/07/Lazy-load-and-persistence-ignorance
1) Should I load the Orders list within the GetCustomerById method?
It's probably a good idea to separate the order mapping code from the customer mapping code. If you're writing your data access code by hand, calling that mapping module from the GetCustomerById method is your best option.
2) What if the Order itself has another list property and so on?
The logic to put all those together has to live somewhere; the related aggregate repository is as good a place as any.
3) What if I want to do lazy loading? Where would I put the code to load the Orders property in customer? Inside the Orders property get{} accessor? But then I would have to inject repository into the domain entity? which I don't think is the right solution.
The best solution I've seen is to make your repository return subclassed domain entities (using something like Castle DynamicProxy) - that lets you maintain persistence ignorance in your domain model.
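Very roughly, that approach might look like this with Castle DynamicProxy; the Orders property has to be virtual, and the loader delegate and property names here are assumptions:

using System;
using System.Collections.Generic;
using Castle.DynamicProxy;

public class LazyOrdersInterceptor : IInterceptor
{
    private readonly Func<int, IList<Order>> _loadOrders;
    private IList<Order> _orders;

    public LazyOrdersInterceptor(Func<int, IList<Order>> loadOrders)
    {
        _loadOrders = loadOrders;
    }

    public void Intercept(IInvocation invocation)
    {
        // only the Orders getter is lazy; everything else passes straight through
        if (invocation.Method.Name == "get_Orders")
        {
            if (_orders == null)
            {
                var customer = (Customer)invocation.InvocationTarget;
                _orders = _loadOrders(customer.Id);
            }
            invocation.ReturnValue = _orders;
            return;
        }
        invocation.Proceed();
    }
}

// In the repository, instead of returning a plain Customer:
// var generator = new ProxyGenerator();
// Customer customer = generator.CreateClassProxy<Customer>(
//     new LazyOrdersInterceptor(id => orderDataMapper.GetOrdersByCustomerId(id)));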
Another possible answer is to create a new proxy object that inherits from Customer, call it CustomerProxy, and handle the lazy load there. All this is pseudo-code; it's meant to give you an idea, not to be copied and pasted as-is.
Example:
public class Customer
{
    public int ID { get; set; }
    public string Name { get; set; }
    // etc...
    public virtual IList<Order> Orders { get; protected set; }
}
here is the Customer "proxy" class... this class does not live in the business layer, but in the Data Layer along with your Context and Data Mappers. Note that any collections you want to make lazy-load you should declare as virtual (I believe EF 4.0 also requires you to make props virtual, as if spins up proxy classes at runtime on pure POCO's so the Context can keep track of changes)
internal sealed class CustomerProxy : Customer
{
    private bool _ordersLoaded = false;

    public override IList<Order> Orders
    {
        get
        {
            IList<Order> orders = new List<Order>();
            if (!_ordersLoaded)
            {
                // assuming you are using mappers to translate entities to db and back
                // mappers also live in the data layer
                CustomerDataMapper mapper = new CustomerDataMapper();
                orders = mapper.GetOrdersByCustomerID(this.ID);
                _ordersLoaded = true;
                // cache the orders for later use of the instance
                base.Orders = orders;
            }
            else
            {
                orders = base.Orders;
            }
            return orders;
        }
    }
}
So, in this case, our entity object Customer is still free from database/data-mapper calls, which is what we want: "pure" POCOs. You've delegated the lazy load to the proxy object, which lives in the data layer and instantiates the data mappers and makes the calls.
There is one drawback to this approach: calling client code can't override the lazy load; it's either on or off. So it's up to you in your particular usage circumstance. If you know that maybe 75% of the time you'll need the Orders of a Customer, then lazy loading is probably not the best bet; it would be better for your CustomerDataMapper to populate that collection at the time you get a Customer entity.
Again, I think NHibernate and EF 4.0 both allow you to change lazy-loading characteristics at runtime, so, as usual, it makes sense to use an ORM, because a lot of this functionality is provided for you.
If you don't use Orders that often, then use a lazy-load to populate the Orders collection.
I hope that this is "right", and is a way of accomplishing lazy-load the correct way for Domain Model designs. I'm still a newbie at this stuff...
Mike
Just need to clarify this one. If I have the interface below:
public interface IRepository<T>
{
    T Add(T entity);
}
when implementing it, is checking whether the entity already exists before persisting it still the job of the repository, or should that be handled somewhere else?
Yes - I recommend doing these checks in the repository.
Long answer: The term "repository" is a bit vague, but it is used more and more as the name of the persistence abstraction layer. The name is nice but does not say much. If you take ASP.NET MVC as an example, the sample apps, like Nerd Dinner and the like, or CodePlex projects, encapsulate data access behind a repository class. If such a layer is implemented with a relational database, for instance, the primary keys of the tables will not allow duplicate entries, which means that in this case the repository implementation will throw an exception if two entries with the same key are inserted. In other words, an RDBMS-backed repository will almost always do this check; you won't be able to avoid it. So, to make the behaviour of repositories out there in the world as consistent as possible and to avoid surprises, let all of them do this check.
It is a separate question whether your business logic should already ensure that the Add() method is not called with an entry that already exists. Sometimes it makes sense to resolve this at a single point only, the database for instance, due to concurrency issues or to save round trips. On the other hand, it is nice, for instance, to tell the user as soon as possible that a username is already taken. So this depends.
have a nice day
If the entity already exists, you can either throw an exception or update the existing entity's fields.
If you choose the latter, the method should probably be called something like AddOrUpdate().
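A sketch of that second option with EF, assuming a Customer entity whose key is Id and an injected DbContext:

using System.Data.Entity;

public class CustomerRepository : IRepository<Customer>
{
    private readonly DbContext _context;

    public CustomerRepository(DbContext context)
    {
        _context = context;
    }

    // Behaves as AddOrUpdate: inserts when the key is new, otherwise copies the new values.
    public Customer Add(Customer entity)
    {
        var existing = _context.Set<Customer>().Find(entity.Id);
        if (existing == null)
        {
            _context.Set<Customer>().Add(entity);
        }
        else
        {
            // duplicate: update the existing entity's fields instead of throwing
            _context.Entry(existing).CurrentValues.SetValues(entity);
        }
        _context.SaveChanges();
        return existing ?? entity;
    }
}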
Linq to SQL example
If I am retrieving a single record, I will use
public Entity GetEntity(int entityID)
{
    return dataContext.Entities.SingleOrDefault(e => e.EntityID == entityID);
}
...And in the calling method, I will check to see if what is returned is null before attempting to use the returned entity.
If I am updating a record, I will retrieve the entity as shown, edit the entity, and then call an UpdateEntity(entityID) repository method to update the fields in the database.
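With LINQ to SQL, such an update method can stay very small because the DataContext is already tracking the entity that was retrieved and edited; this is only a sketch of that flow:

public void UpdateEntity(int entityID)
{
    // the entity was fetched through this same dataContext in GetEntity(entityID),
    // so the context already knows which fields the caller changed
    var entity = dataContext.Entities.SingleOrDefault(e => e.EntityID == entityID);
    if (entity == null)
        throw new InvalidOperationException("Entity " + entityID + " no longer exists.");

    dataContext.SubmitChanges();
}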
If I am adding a record, it's even easier. Since this is a database, and my tables always contain an Identity field of type int (an auto-assignable number, essentially), adding a record is the simplest operation of all (it's always a new record):
public void InsertEntity(Entity entity)
{
    dataContext.Entities.InsertOnSubmit(entity);
    dataContext.SubmitChanges();
}
Business rules (email addresses are unique, for example) can be handled in the repository, or in a separate business layer. If you are looking for the most "correct" way, I think most people would agree that business rules belong in their own Business Logic Layer.
Essentially the decision on where to handle that case depends on your exact requirements.
If you have business rules that define clear-cut actions for when this happens (e.g. if a duplicate exists, the new item should be renamed), then it can be built into the repository class.
On the other hand, if more complex rules are in place whereby, for example, more information is required to change the item before adding, then it should be handled further up the food chain.
The concept of a repository states that it exists to perform the persistence activities.
So if you can do it all within the repository, that's fine. If you find that you start to reference things outside the repository, or your repository acquires dependencies, e.g. calling another repository, a service, or a manager (or whatever processor nomenclature you prefer), then it's a good sign to take a step back.