I have a real-world scenario that seems like a perfect fit for the Domain Model pattern.
It is a field that has multiple quadrants, each quadrant with its own state.
So my aggregate root is the field.
Now I have one important question:
I want a persistence-ignorant domain model, which I think makes sense. So where should I call the update methods on the repository? Not in the domain model, right?
And how should the aggregate root's child entities be updated in the database when there is no change-tracking proxy for these objects and the repository should not be called from the entities?
Or do I misunderstand the domain model pattern?
Is my question clear? :)
Thank you in advance
Best
Laurin
So where should I call the update methods on the repository?
In a stereotypical DDD architecture the repository is usually called by an application service. An application service is a class which serves as a facade encapsulating your domain and implements domain use cases by orchestrating domain objects, repositories and other services.
I'm not familiar with your domain, but suppose there is a use case which shifts a State from one Quadrant in a Field to another. As you stated, the Field is the AR. So you'd have a FieldApplicationService referencing a FieldRepository:
public class FieldApplicationService
{
    readonly FieldRepository fieldRepository;

    public FieldApplicationService(FieldRepository fieldRepository)
    {
        this.fieldRepository = fieldRepository;
    }

    public void ShiftFieldState(int fieldId, string quadrant, string state)
    {
        // retrieve the Field AR
        var field = this.fieldRepository.Get(fieldId);
        if (field == null)
            throw new Exception("Field not found: " + fieldId);

        // invoke behavior on the Field AR.
        field.ShiftState(quadrant, state);

        // commit changes.
        this.fieldRepository.Update(field);
    }
}
The application service is itself very thin. It does not implement any domain logic; it only orchestrates and sets the stage for execution of domain logic, which includes accessing the repository. All code that depends on your domain, such as the presentation layer or a hosting service, will invoke domain functionality through this application service.
The repository could be implemented in a variety of ways. It could use an ORM such as NHibernate, in which case change tracking is built in and the usual approach is to commit all changes instead of calling an explicit update. NHibernate also provides a Unit of Work, allowing changes to multiple entities to be committed as one.
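For illustration only, a minimal sketch of that style with NHibernate, assuming a mapped Field entity, the ShiftState behavior from above, an ISessionFactory configured elsewhere, and local fieldId/quadrant/state variables:
// NHibernate's ISession tracks changes, so no explicit Update call is needed here.
using (var session = sessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    // Loading through the session attaches the Field to NHibernate's change tracking.
    var field = session.Get<Field>(fieldId);

    // Mutate the aggregate through its behavior; the session sees the dirty state.
    field.ShiftState(quadrant, state);

    // Committing flushes all tracked changes as one unit of work.
    transaction.Commit();
}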
In your case, as you stated, there is no change tracking so an explicit call to update is needed and it is up to the repository implementation to handle this. If using SQL Server as the database, the Update method on the repository can simply send all properties of a Field to a stored procedure which will update the tables as needed.
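A rough sketch of what such an Update method could look like with plain ADO.NET (System.Data.SqlClient); the UpdateField stored procedure, the connectionString field, and the Field.Id/quadrant members are assumptions made for the example:
public void Update(Field field)
{
    using (var connection = new SqlConnection(this.connectionString))
    using (var command = new SqlCommand("UpdateField", connection))
    {
        command.CommandType = CommandType.StoredProcedure;

        // Push the aggregate's current state to the stored procedure.
        command.Parameters.AddWithValue("@FieldId", field.Id);

        // The quadrant states would be passed however the schema expects them,
        // e.g. as a table-valued parameter or one command per quadrant.

        connection.Open();
        command.ExecuteNonQuery();
    }
}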
The Aggregate Root (AR) is updated somewhere. In a message-driven architecture, that somewhere is a command handler, but let's say for general purposes that it is a service. The service gets the AR from a repository, calls the relevant methods, then saves the AR back to the repository.
The AR doesn't know about the repository; that is not its concern. The repository then saves all of the AR's modifications as a unit of work (that is, all or nothing). How the repo does that depends on the persistence strategy you have chosen.
If you're using Event Sourcing, the AR generates events and the repo uses those events to persist the AR's state. If you take a more common approach, the AR should expose its state data somewhere, perhaps as a property; this is called the memento pattern. The repository persists that data in one commit.
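A small sketch of that memento-style approach, using the field/quadrant example from the question; the FieldState snapshot type and the dictionary representation are made up for illustration:
public class Field
{
    private readonly Dictionary<string, string> quadrantStates = new Dictionary<string, string>();

    public void ShiftState(string quadrant, string state)
    {
        // Domain behavior mutates internal state only.
        this.quadrantStates[quadrant] = state;
    }

    // Memento: hand out a snapshot for the repository to persist,
    // without exposing the aggregate's internals for modification.
    public FieldState GetSnapshot()
    {
        return new FieldState(new Dictionary<string, string>(this.quadrantStates));
    }
}

public class FieldState
{
    public FieldState(IDictionary<string, string> quadrantStates)
    {
        this.QuadrantStates = quadrantStates;
    }

    public IDictionary<string, string> QuadrantStates { get; private set; }
}
The repository persists the snapshot in one commit and never reaches into the aggregate itself.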
But one thing is certain: NEVER think about persistence details when dealing with a domain object. That is, don't couple the domain to an ORM or to database-specific concerns.
The "application code" should call the repository. How the application code is hosted is an infrastructure concern. Some examples of how the application code might be hosted are WCF service, as a Winforms/WPF application, or on a Web server.
The repository implementation is responsible for tracking changes to the aggregate root and its child entities as well as saving them back to the db.
Here is an example:
Domain Project
public class DomainObject : AggregateRootBase // Implements IAggregateRoot
{
    public void DoSomething() { }
}
public interface IDomainObjectRepository : IRepository<DomainObject>, IEnumerable<DomainObject>
{
    DomainObject this[object id] { get; set; }
    void Add(DomainObject item);
    void Remove(DomainObject item);
    int IndexOf(DomainObject item);
    object IDof(DomainObject item);
    // GetEnumerator() is inherited from IEnumerable<DomainObject>
}
Implementation Project
public class SqlDomainObjectRepository : List<DomainObjectDataModel>, IDomainObjectRepository
{
    // TODO: Implement all of the members of IDomainObjectRepository
}
Application Project
public class MyApp
{
    public void Run()
    {
        IDomainObjectRepository repository = null; // TODO: Initialize a concrete SqlDomainObjectRepository that loads what we need.

        DomainObject item = repository[0];          // Get the one (or set) that we're working with.
        item.DoSomething();                         // Call some business logic that changes the state of the aggregate root.
        repository[repository.IDof(item)] = item;   // Save the domain object with all changes back to the db.
    }
}
If you need to transactionalize changes to multiple aggregate roots so the changes are made on an all or nothing basis, then you should look into the Unit of Work pattern.
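A minimal sketch of what that pattern's surface could look like; the member names here are illustrative, not taken from any particular library:
public interface IUnitOfWork : IDisposable
{
    // Repositories register changes with the unit of work instead of saving immediately.
    void RegisterNew(object entity);
    void RegisterDirty(object entity);
    void RegisterDeleted(object entity);

    // Commit writes every registered change in a single transaction, or nothing at all.
    void Commit();
}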
Hope this helps clarify things!
My solution is for the aggregate root to raise events that are handled outside of it. Those event handlers call the repository to update the database. You will also need a service bus to register and dispatch the events. See my example:
public class Field : AggregateRoot
{
    public void UpdateField()
    {
        // do some business logic
        // and trigger FieldUpdatedEvent with the necessary parameters
        // ...
        // you can also update some quadrants
        // and trigger QuadrantsUpdatedEvent with the necessary parameters
    }
}
public class FieldEventHandlers : EventHandler
{
    FieldRepository repository; // assumed to be injected elsewhere

    void Handle(FieldUpdatedEvent e)
    {
        repository.Update(e.Field);
    }
}
public class QuadrantEventHandlers : EventHandler
{
    QuadrantRepository repository; // assumed to be injected elsewhere

    void Handle(QuadrantsUpdatedEvent e)
    {
        repository.Update(e.Quadrant);
    }
}
I've been learning about CQRS/ES. Looking at small example projects, I often see events mutating the entity state. For instance, if we look at the Order aggregate root:
public class Order : AggregateRoot {
    private void Apply(OrderLineAddedEvent @event) {
        var existingLine = this.OrderLines.FirstOrDefault(
            i => i.ProductId == @event.ProductId);
        if (existingLine != null) {
            existingLine.AddToQuantity(@event.Quantity);
            return;
        }
        this.OrderLines.Add(new OrderLine(@event.ProductId, @event.ProductTitle, @event.PricePerUnit, @event.Quantity));
    }
    public ICollection<OrderLine> OrderLines { get; private set; }
    public void AddOrderLine(/*parameters*/) {
        this.Apply(new OrderLineAddedEvent(/*parameters*/));
    }
    public Order() {
        this.OrderLines = new List<OrderLine>();
    }
    public Order(IEnumerable<IEvent> history) : this() {
        foreach (IEvent @event in history) {
            this.ApplyChange(@event, false);
        }
    }
}
public abstract class AggregateRoot {
    public Queue<IEvent> UncommittedEvents { get; protected set; }

    protected AggregateRoot() {
        this.UncommittedEvents = new Queue<IEvent>();
    }

    protected abstract void Apply(IEvent @event);

    public void CommitEvents() {
        this.UncommittedEvents.Clear();
    }

    protected void ApplyChange(IEvent @event, Boolean isNew) {
        Apply(@event);
        if (isNew) this.UncommittedEvents.Enqueue(@event);
    }
}
When OrderLineAddedEvent is applied, it mutates the Order by adding a new order line. But I don't understand these things:
If this is the right approach, how are the changes then persisted?
Or should I somehow publish the event from Order to a corresponding handler? How do I implement this technically? Should I use a service bus to transmit the events?
I am also still experimenting with ES, so this is somewhat of an opinion rather than guidance :)
At some stage I came across this post by Jan Kronquist: http://www.jayway.com/2013/06/20/dont-publish-domain-events-return-them/
The gist of it is that events should be returned from the domain rather than being dispatched from within the domain. This really struck a chord with me.
If one were to take a more traditional approach, where a normal persistence-oriented repository is used, the Application Layer would handle transactions and repository access. The domain would simply be called to perform the behaviour.
Also, the domain should always stick to persistence ignorance. Having an aggregate root maintain a list of events has always seemed somewhat odd to me, and I definitely do not like having my ARs inherit from some common base. It does not feel clean enough.
So putting this together using what you have:
public OrderLineAddedEvent AddOrderLine(/*parameters*/) {
    // Apply is assumed here to return the event it has applied.
    return this.Apply(new OrderLineAddedEvent(/*parameters*/));
}
In my POC I have also not been using an IEvent marker interface but rather just an object.
Now the Application Layer is back in control of the persistence.
I have an experimental GitHub repository going:
https://github.com/Shuttle/shuttle-recall-core
https://github.com/Shuttle/shuttle-recall-sqlserver
I haven't had time to look at it for a while and I know I have already made some changes but you are welcome to have a look.
The basic idea is that the Application Layer will use the EventStore/EventStream to manage the events for an aggregate in the same way that the Application Layer would use a Repository. The EventStream will be applied to the aggregate. All events returned from the domain behaviours will be added to the EventStream, after which the stream is persisted again.
This keeps all the persistence-oriented bits out of the domain.
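As a rough sketch of that flow, with a hypothetical IEventStore/EventStream pair standing in for whatever the infrastructure provides, and with an assumed AddOrderLine(productId, quantity) signature returning its event as described above:
public class OrderApplicationService
{
    private readonly IEventStore eventStore;

    public OrderApplicationService(IEventStore eventStore)
    {
        this.eventStore = eventStore;
    }

    public void AddOrderLine(Guid orderId, int productId, int quantity)
    {
        // Load the event stream and rebuild the aggregate from it.
        var stream = this.eventStore.Get(orderId);
        var order = new Order(stream.Events);

        // The domain behavior returns the event instead of dispatching it.
        var orderLineAdded = order.AddOrderLine(productId, quantity);

        // The application layer appends the returned event and persists the stream.
        stream.Append(orderLineAdded);
        this.eventStore.Save(stream);
    }
}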
An Entity doesn't update itself magically. Something (usually a service) will invoke the update behaviour of the entity. So the service uses the entity, which generates and applies the events; then the service persists the entity via a repository, gets the new events from the entity and publishes them.
An alternative is for the Event Store itself to publish the events.
Event Sourcing is about expressing an entity's state as a stream of events. That's why the entity updates itself by generating and applying events: it needs to add its changes to the event stream. That stream is also what gets stored in the database, aka the Event Store.
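To make that orchestration concrete, a hedged sketch using the Order example from the question; IOrderRepository and IEventPublisher are placeholders, and UncommittedEvents/CommitEvents follow the AggregateRoot base shown earlier:
public class OrderService
{
    private readonly IOrderRepository repository;
    private readonly IEventPublisher publisher;

    public OrderService(IOrderRepository repository, IEventPublisher publisher)
    {
        this.repository = repository;
        this.publisher = publisher;
    }

    public void AddOrderLine(Guid orderId /* plus whatever the order line needs */)
    {
        // The service invokes the entity's behavior; the entity generates and applies its events.
        var order = this.repository.Get(orderId);
        order.AddOrderLine(/*parameters*/);

        // Persist the entity (that is, its new events), then publish them.
        this.repository.Save(order);
        foreach (var @event in order.UncommittedEvents)
        {
            this.publisher.Publish(@event);
        }
        order.CommitEvents();
    }
}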
Lately I am splitting my entities into two objects.
First is what I call a Document object. This is mostly a state-only ORM class, with all of the configuration describing how the information is persisted.
Then I wrap that Document with an Entity object, which is basically a mutation service containing all the behaviour.
My entities are basically stateless objects, except of course for the contained document, but in any case I avoid exposing it to the outside world.
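Roughly, that split looks like this; the Account/AccountDocument names and members are invented for illustration:
// Persistence-oriented "Document": state only, plus whatever ORM mapping it needs.
public class AccountDocument
{
    public int Id { get; set; }
    public decimal Balance { get; set; }
}

// Behavior-oriented "Entity": wraps the document and exposes only operations.
public class Account
{
    private readonly AccountDocument document;

    public Account(AccountDocument document)
    {
        this.document = document;
    }

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");

        // All mutation goes through behavior; the document itself is never exposed.
        this.document.Balance += amount;
    }
}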
I have a design problem with my project that I don't know how to fix. I have a DAL layer which holds Repositories and a Service layer which holds "Processors". The role of the processors is to provide access to DAL data and perform some validation and formatting logic.
My domain objects all have a reference to at least one object from the Service Layer (to retrieve the values of their properties from the repositories). However, I face two cyclical dependencies. The first "cyclical dependency" comes from my design, since I want my DAL to return domain objects (it is conceptual), and the second comes from my code.
A domain object is always dependent on at least one Service Object
The domain object retrieves its properties from the repositories by calling methods on the service
The methods of the service call the DAL
However, and here is the problem: when the DAL has finished its job, it has to return domain objects. But to create these objects it has to inject the required Service Object dependencies (as these dependencies are required by the domain objects).
Therefore, my DAL Repositories have dependencies on Service Objects.
And this results in a very clear cyclical dependency. I am confused about how I should handle this situation. Lastly, I was thinking about letting my DAL return DTOs, but that doesn't seem compatible with the onion architecture, because the DTOs are defined in the Infrastructure, and the Core and the Service Layer should not know about the Infrastructure.
Also, I'm not excited about changing the return types of all the methods of my repositories since I have hundreds of lines of code...
I would appreciate any kind of help, thanks!
UPDATE
Here is my code to make the situation clearer:
My Object (In the Core):
public class MyComplexClass1
{
    public MyComplexClass1 Property1 { get; set; }
    public MyComplexClass2 Property2 { get; set; }

    private readonly IService MyService;

    public MyComplexClass1(IService MyService)
    {
        this.MyService = MyService;
        this.Property1 = MyService.GetMyComplexClassList1();
        // .....
    }
}
This is my Service Interface (In the Core)
public interface IService
{
    MyComplexClass1 GetMyComplexClassList1();
    // ...
}
This my Repository Interface (In the Core)
public interface IRepoComplexClass1
{
    MyComplexClass1 GetMyComplexClassObject();
    // ...
}
Now the Service Layer implements IService, and the DAL Layer Implements IRepoComplexClass1.
But my point is that in my repo, I need to construct my Domain Object
This is the Infrastructure Layer:
using Core;

public class Repo : IRepoComplexClass1
{
    public MyComplexClass1 GetMyComplexClassObject()
    {
        // Retrieve all the stuff...
        // ... and now it's time to convert the DTOs to Domain Objects.
        // I need to write:
        //     DomainObject.Property1 = new MyComplexClass1(ID, Service);
        // So my Repository has a dependency on my Service, and my Service has a dependency
        // on my Repository (because my Service methods make use of the Repository).
        // Ninject is then completely messed up.
        return null; // placeholder; this is exactly the part I cannot write cleanly
    }
}
I hope it's clearer now.
First of all, architectural guidance like the Onion Architecture and Domain Driven Design (DDD) typically does not fit every system being designed. In fact, using these techniques is discouraged unless the domain has significant complexity to warrant the cost. So the assumption here is that the domain you are modelling is complex enough that it will not fit into a simpler pattern.
IMHO, both the Onion Architecture and DDD try to achieve the same thing. Namely, the ability to have a programmable (and perhaps easily portable) domain for complex logic that is devoid of all other concerns. That is why in Onion, for example, application, infrastructure, configuration and persistence concerns are at the edges.
So, in summary, the domain is just code. It can then utilize those cool design patterns to solve the complex problems at hand without worrying about anything else.
I really like the Onion articles because the picture of concentric barriers is different to the idea of a layered architecture.
In a layered architecture, it is easy to think vertically, up and down through the layers. For example, you have a service on top which speaks to the outside world (through DTOs or ViewModels); the service calls the business logic; finally, the business logic calls down to some persistence layer to keep the state of the system.
However, the Onion Architecture describes a different way to think about it. You may still have a service at the top, but this is an application service. For example, a Controller in ASP.NET MVC knows about HTTP, application configuration settings and security sessions. But the job of the controller isn't just to defer work to lower (smarter) layers. The job is to as quickly as possible map from the application side to the domain side. So simply speaking, the Controller calls into the domain asking for a piece of complex logic to be executed, gets the result back, and then persists. The Controller is the glue that is holding things together (not the domain).
So, the domain is the centre of the business domain. And nothing else.
This is why some complain about ORM tools that need attributes on the domain entities. We want our domain completely clean of all concerns other than the problem at hand. So, plain old objects.
So, the domain does not speak directly to application services or repositories. In fact, nothing that the domain calls speaks to these things. The domain is the core, and therefore, the end of the execution stack.
So, for a very simple code example (adapted from the OP):
Repository:
// it is only infrastructure if it doesn't know about specific types directly
public class Repository<T>
{
    public T Find(int id)
    {
        // resolve the entity
        return default(T);
    }
}
Domain Entity:
public class MyComplexClass1
{
    public MyComplexClass1 Property1 { get; private set; } // required, so it cannot be set from outside
    public MyComplexClass2 Property2 { get; set; }

    // note: no IService dependency in the domain entity any more,
    // and no dependency injection frameworks!
    public MyComplexClass1(MyComplexClass1 property1)
    {
        // actually using the constructor to define the required properties:
        // MyComplexClass1 is required and MyComplexClass2 is optional
        this.Property1 = property1;
        // .....
    }

    public ComplexCalculationResult CrazyComplexCalculation(MyComplexClass3 complexity)
    {
        var theAnswer = 42;
        return new ComplexCalculationResult(theAnswer);
    }
}
Controller (Application Service):
public class TheController : Controller
{
    private readonly IRepository<MyComplexClass1> complexClassRepository;
    private readonly IRepository<ComplexCalculationResult> complexResultRepository;

    // this can use IoC if needed, no probs
    public TheController(IRepository<MyComplexClass1> complexClassRepository, IRepository<ComplexCalculationResult> complexResultRepository)
    {
        this.complexClassRepository = complexClassRepository;
        this.complexResultRepository = complexResultRepository;
    }

    // I know about HTTP
    public void Post(int id, int value)
    {
        var entity = this.complexClassRepository.Find(id);
        var complex3 = new MyComplexClass3(value);
        var result = entity.CrazyComplexCalculation(complex3);
        this.complexResultRepository.Save(result);
    }
}
Now, very quickly you will be thinking, "Whoa, that Controller is doing too much". For example, what if we need 50 values to construct MyComplexClass3? This is where the Onion Architecture is brilliant. There are design patterns for that, called Factory and Builder, and without the constraints of application or persistence concerns you can implement them easily. So you refactor these patterns into the domain (and they become your domain services).
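For example, a small sketch of a builder living in the domain; it assumes a richer MyComplexClass3 constructor than the single-value one above, purely to show the idea:
public class MyComplexClass3Builder
{
    private int value;
    private string label;

    public MyComplexClass3Builder WithValue(int value)
    {
        this.value = value;
        return this;
    }

    public MyComplexClass3Builder WithLabel(string label)
    {
        this.label = label;
        return this;
    }

    public MyComplexClass3 Build()
    {
        // Centralizes the messy construction so the Controller stays thin.
        return new MyComplexClass3(this.value, this.label);
    }
}
The Controller then only gathers raw input and hands it to the builder.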
In summary, nothing the domain calls knows about application or persistence concerns. It is the end, the core of the system.
Hope this makes sense, I wrote a little bit more than I intended. :)
I am trying to create a small personal project which uses EF to handle data access. My project architecture has a UI layer, a service layer, a business layer and a data access layer. EF is contained within the DAL. I don't think it's right to then reference my DAL from my UI. So I want to create custom classes for 'business objects' which are shared between all my layers.
Example: I have a User table. EF creates a User entity. I have a method, say GetListOfUsers(). The presentation layer shouldn't rely on a List of EF User entities, as the UI would then have a direct link to the DAL. Instead, I need to expose a method in the DAL that looks something like:
List<MyUserObject> GetListOfUsers();
That would then call my internal method which gets the list of user entities, transforms them into MyUserObjects, and passes them back through the layers to my UI.
Is that correct design? I don't feel the UI, or business layer for that matter, should have any knowledge of the entity framework.
What this may mean, though, is maybe I need a 'Transformation layer' between my DAL and my Business layer, which transforms my entities into my custom objects?
Edit:
Here is an example of what I am doing:
I have a data access project, which will contain the Entity Framework. In this project, I will have a method to get me a list of states.
public class DataAccessor
{
    taskerEntities te = new taskerEntities();

    public List<StateObject> GetStates()
    {
        var transformer = new Transformer();
        var items = (from s in te.r_state select s).ToList();
        var states = new List<StateObject>();

        foreach (var rState in items)
        {
            var s = transformer.State(rState);
            states.Add(s);
        }

        return states;
    }
}
My UI/Business/Service projects mustn't know about Entity Framework objects. They, instead, must know about my custom-built State objects. So, I have a Shared Library project containing my custom-built objects:
namespace SharedLib
{
    public class StateObject
    {
        public int stateId { get; set; }
        public string description { get; set; }
        public Boolean isDefault { get; set; }
    }
}
So, my DAL gets the items into a list of entity objects, and then I pass them through my transformation method to turn them into custom-built objects. The transformation takes an EF object and outputs a custom object.
public class Transformer
{
    public StateObject State(r_state state)
    {
        var s = new StateObject
        {
            description = state.description,
            isDefault = state.is_default,
            stateId = state.state_id
        };
        return s;
    }
}
This seems to work. But is it a valid pattern?
So, at some point, your UI will have to work with the data and business objects that you have. It's a fact of life. You could try to abstract further, but that would only defer the interaction elsewhere.
I agree that business processes should stand apart from the UI. I also agree that your UI should not interact directly with how you access your data. What you have suggested (something along the lines of "GetListOfUsers()") is known as the Repository Pattern.
The purpose of the repository pattern is to:
separate the logic that retrieves the data and maps it to the entity model from the business logic that acts on the model. The business logic should be agnostic to the type of data that comprises the data source layer.
My recommendation is to use the Repository Pattern to hide HOW you're accessing your data (and, allow a better separation of concerns) and just be concerned with the fact that you "just want a list of users" or you "just want to calculate the sum of all time sheets" or whatever it is that you want your application to actually focus on. Read the link for a more detailed description.
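In practice that can be as small as an interface like this; the names mirror your example and are only illustrative:
public interface IUserRepository
{
    // Callers only express intent; how the data is fetched stays behind the interface.
    List<MyUserObject> GetListOfUsers();
    MyUserObject GetUserById(int id);
    void Save(MyUserObject user);
}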
First, do you really need all that layers in your 'small personal project'?
Second, I think your suggested architecture is a bit unclear.
If I understand you correctly, you want to decouple your UI from your DAL. For that purpose you can, for example, extract an interface from the MyUserObject class (defined in the DAL, obviously), let's call it IMyUserObject, and instead of referencing the DAL from the UI, reference some abstract project where all types are data-agnostic. Also, I suggest having a service layer which provides your presentation layer (UI) with concrete objects. If you use MVC, you can have a link to the services from your controller class. The service layer in turn can use a Repository or some other technique to deal with the DAL (it depends on the complexity you choose).
Considering the transformation layer: people usually deal with mapping from one type to another when they have one simple model (DTO) to communicate with the DB, another one, the domain model, that deals with all the subtleties of the business logic, and another one, the presentation model, that is best suited for user interaction. Such layering separates concerns to good measure, making each task simpler, but making the app more complicated in general.
So you may end up having MyUserObjectDTO, MyUserObject and MyUserObjectView, and some mapping or transformation between them.
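The mapping between them then stays mechanical, something along these lines (hand-rolled here, with made-up property names; a library such as AutoMapper can do the same job):
public static class UserMappings
{
    // DTO (persistence shape) -> domain model
    public static MyUserObject ToDomain(MyUserObjectDTO dto)
    {
        return new MyUserObject(dto.Id, dto.Name, dto.Email);
    }

    // domain model -> presentation model
    public static MyUserObjectView ToView(MyUserObject user)
    {
        return new MyUserObjectView
        {
            DisplayName = user.Name,
            Email = user.Email
        };
    }
}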
I have a program that deals with (virtual) money order transactions, and there are many classes that have logic requiring data to be saved to a database. My question is whether I should call my repositories directly from the various classes, or whether I should instead raise events that a DatabaseManager class can listen to, so that all repositories are called from this one class.
I don't have experience working with databases and repositories, so I would appreciate some deeper insight and tips here, such as on what criteria you would choose one approach over the other.
It's probably important to note that the database in this case is not used to retrieve data for performing program logic, except on program startup. So it's basically keeping all data in runtime objects, and just dumping to database for archiving.
I would pass an IRepository to my classes which they can then call to save data. For one thing it makes testing easier because you can easily inject a mock repository, and for another it makes it explicit that your classes have such a dependency. You might want to search for the term Dependency Injection.
Simple example:
class Account
{
    private readonly IRepository<Account> _Repository;

    public Account(IRepository<Account> repository)
    {
        _Repository = repository;
    }

    public void ChangeOwner(Owner newOwner)
    {
        // change ownership
        _Repository.Save(this);
    }
}
I've commonly seen examples like this on business objects:
public void Save()
{
    if (this.id > 0)
    {
        ThingyRepository.UpdateThingy(this);
    }
    else
    {
        int id = 0;
        ThingyRepository.AddThingy(this, out id);
        this.id = id;
    }
}
So why here, on the business object? This seems more like contextual or data-related logic than business logic.
For example, a consumer of this object might go through something like this...
...Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
thingy.Save();
Or, something like this for an update...
... Get form values from a web app...
Thingy thingy = Thingy.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["name"].Value;
thingy.Save();
So why is this? Why not contain actual business logic such as calculations, business specific rules, etc., and avoid retrieval/persistence?
Using this approach, the code might look like this:
... Get form values from a web app...
Thingy thingy = Thingy.CreateNew(Form["name"].Value, Form["gadget"].Value, Form["process"].Value);
int id;
ThingyRepository.AddThingy(ref thingy, out id);
Or, something like this for an update...
... get form values from a web app ...
Thingy thingy = ThingyRepository.GetThingyByID(Int32.Parse(Form["id"].Value));
thingy.Name = Form["Name"].Value;
ThingyRepository.UpdateThingy(ref thingy);
In both of these examples, the consumer, who knows best what is being done to the object, calls the repository and either requests an ADD or an UPDATE. The object remains DUMB in that context, but still provides its core business logic as it pertains to itself, not how it is retrieved or persisted.
In short, I am not seeing the benefit of consolidating the GET and SAVE methods within the business object itself.
Should I just stop complaining and conform, or am I missing something?
This leads into the Active Record pattern (see P of EAA p. 160).
Personally I am not a fan. Tightly coupling business objects and persistence mechanisms so that changing the persistence mechanism requires a change in the business object? Mixing data layer with domain layer? Violating the single responsibility principle? If my business object is Account then I have the instance method Account.Save but to find an account I have the static method Account.Find? Yucky.
That said, it has its uses. For small projects with objects that directly conform to the database schema and have simple domain logic and aren't concerned with ease of testing, refactoring, dependency injection, open/closed, separation of concerns, etc., it can be a fine choice.
Your domain objects should have no reference to persistence concerns.
Create a repository interface in the domain that represents a persistence service, and implement it outside the domain (you can implement it in a separate assembly).
This way your aggregate root doesn't need to reference the repository (since it's an aggregate root, it should already have everything it needs), and it will be free of any dependency or persistence concern. Hence it is easier to test, and domain focused.
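Sketched out with the Field example from the original question (the member list is illustrative):
// Domain assembly: only the abstraction lives here.
public interface IFieldRepository
{
    Field Get(int fieldId);
    void Save(Field field);
}

// Infrastructure assembly: the implementation references the domain, never the other way around.
public class SqlFieldRepository : IFieldRepository
{
    public Field Get(int fieldId)
    {
        // ... load and rehydrate the aggregate and its child entities
        throw new NotImplementedException();
    }

    public void Save(Field field)
    {
        // ... persist the aggregate and its child entities in one transaction
        throw new NotImplementedException();
    }
}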
While I have no deep understanding of DDD, it makes sense to have one method which does an UPSERT (insert if the record doesn't exist, update otherwise).
The user of the class can act dumb and simply call Save, whether the record is new or already exists.
Having one point of action is much clearer.
EDIT: The decision of whether to do an INSERT or an UPDATE is better left to the repository. The user can call Repository.Save(....), which results in either a new record (if the record is not already in the DB) or an update.
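A small sketch of that, assuming the entity exposes an Id that is zero until it has been inserted:
public class ThingyRepository
{
    public void Save(Thingy thingy)
    {
        // The repository, not the business object, decides between INSERT and UPDATE.
        if (thingy.Id > 0)
        {
            this.Update(thingy);
        }
        else
        {
            thingy.Id = this.Insert(thingy);
        }
    }

    private int Insert(Thingy thingy) { /* INSERT and return the new id */ return 0; }
    private void Update(Thingy thingy) { /* UPDATE the existing row */ }
}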
If you don't like their approach, make your own. Personally, Save() instance methods on business objects smell really good to me: one less class name I need to remember. However, I don't have a problem with a factory save either, and I don't see why it would be so difficult to have both, e.g.:
class myObject
{
    public void Save()
    {
        myObjectFactory.Save(this);
    }
}
...
class myObjectFactory
{
    public static void Save(myObject obj)
    {
        // Upsert myObject
    }
}