I have an invoice object, which consists of items, and each item has a relation to a service.
The structure looks like this:
{
  "invoiceId": "dsr23343",
  "items": [
    {
      "id": 1,
      "service": {
        "serviceCode": "HTT"
      }
    }
  ]
}
One of my requirements is that an item must not have a relation to a service which does not exist in our system.
From my understanding, domain objects should never enter an invalid state.
So what I am doing is the following:
var service = new Service("SomeService");
var item = new Item(service);
invoice.AddItem(item);
My question is: should I require the AddItem function to receive a Repository as a second parameter, and throw an exception if the Service does not exist in the database?
"Should I require the AddItem function to receive a Repository as a second parameter, and throw an exception if the Service does not exist in the database?"
Short answer: sure, why not?
Longer answer...
If Service and Invoice are part of the same aggregate, then the repository is unnecessary -- just look at the state of the aggregate. So what follows assumes that there is a transaction boundary between the Invoice and the Service.
Using a Repository as the argument is a bit too much stuff -- Invoice doesn't need to load the Service, it just needs to know if the Service exists. So instead of putting a Repository in the method signature, you could use a DomainService that supports the "does this service exist?" query.
(The implementation of the DomainService probably does a lookup in the Repository -- we're not doing magic here, we're just isolating Invoice from implementation details it doesn't need to know about).
Using the more restrictive interface in the signature documents clearly what the integration contract is between these components.
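For illustration, a minimal sketch of that narrower contract (the IServiceCatalog name and members are my own, not from the original design; it assumes Item exposes its Service):

public interface IServiceCatalog
{
    // The only question Invoice needs answered.
    bool ServiceExists(string serviceCode);
}

public class Invoice
{
    private readonly List<Item> _items = new List<Item>();

    // Invoice depends on the "does this service exist?" query,
    // not on a full Repository.
    public void AddItem(Item item, IServiceCatalog catalog)
    {
        if (!catalog.ServiceExists(item.Service.ServiceCode))
            throw new InvalidOperationException(
                "Unknown service code: " + item.Service.ServiceCode);
        _items.Add(item);
    }
}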
That said, the requirement is very suspicious. If Service and Invoice are in different aggregates, then they potentially have different life cycles. What is supposed to happen when you try to load an invoice that includes an item referencing a service which no longer exists? Is that use case supposed to explode? If so, it's going to be hard to edit the invoice to fix the problem....
What if, while you are adding the item to the invoice, some other thread is deleting the service...?
Review Udi Dahan's essay: Race Conditions Don't Exist. Executive summary - if your model is sensitive to microsecond variations in timing, you probably aren't modelling your business.
You've got at least three other alternatives to protect this "invariant".
One is at the client level; if you don't let the client produce invalid service codes, then you aren't going to have this problem. Input validation belongs in the client component or in the application component, not so much the model. That is, it's the sort of thing that you might check when the application is constructing the ServiceCode from the DTO that traveled across the process boundary.
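For example, a sketch of that boundary check (ParseServiceCode and the _knownCodes lookup are hypothetical names, not from the original answer):

// Application-layer mapping from the wire DTO; the domain model never sees a bad code.
public ServiceCode ParseServiceCode(string raw)
{
    if (raw == null || !_knownCodes.Contains(raw))
        throw new ArgumentException("Unknown service code: " + raw);
    return new ServiceCode(raw);
}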
One is downstream of the model - if you can detect invoice items that reference invalid service codes, then you can broadcast an exception report and use the contingency response process to manage the problem. Consistency issues that are rare, cheap to detect, and easy to fix don't need tight validation in the domain model.
One is within the model itself -- if creation of an invoice item is tightly coupled to the lifetime of a service, then maybe the item is created by the service, rather than by the invoice. For example:
class Service {
    void ReportUsage(Customer customer, TimePeriod period) { ... }
}
That wouldn't be an unusual-looking signature, and you can probably be confident that a Service raising a domain event is going to correctly report its own ServiceCode.
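To make that concrete, here is a hedged sketch of what raising such an event might look like (the ServiceUsed type and its fields are assumptions of mine, not part of the original answer):

public class ServiceUsed
{
    public readonly string ServiceCode;
    public readonly Customer Customer;
    public readonly TimePeriod Period;

    public ServiceUsed(string serviceCode, Customer customer, TimePeriod period)
    {
        ServiceCode = serviceCode;
        Customer = customer;
        Period = period;
    }
}

public class Service
{
    public string ServiceCode { get; private set; }

    public ServiceUsed ReportUsage(Customer customer, TimePeriod period)
    {
        // The Service supplies its own code, so an invoice item built from
        // this event can never reference a service that doesn't exist.
        return new ServiceUsed(ServiceCode, customer, period);
    }
}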
Related
What should we do when we have a UI that's not task-based, i.e. it doesn't have tasks corresponding to our entity methods, which in turn correspond to the ubiquitous language?
For example, let's say we have a domain model for WorkItem that has the properties: StartDate, DueDate, AssignedToEmployeeId, WorkItemType, Title, Description, CreatedByEmployeeId.
Now, various things can change on the WorkItem; broken down, it boils down to methods like:
WorkItem.ReassignToAnotherEmployee(string employeeId)
WorkItem.Postpone(DateTime newDateTime)
WorkItem.ExtendDueDate(DateTime newDueDate)
WorkItem.Describe(string description)
But on our UI side there is just one form with fields corresponding to our properties and a single Save button: a CRUD UI. Obviously, that leads to a single CRUD REST API endpoint like PUT domain.com/workitems/{id}.
The question is: how do we handle requests that come to this endpoint from the domain model's perspective?
OPTION 1
Have a CRUD-like method WorkItem.Update(...)? (This, obviously, defeats the whole purpose of the ubiquitous language and DDD.)
OPTION 2
The application service called by the endpoint controller has a method WorkItemService.Update(...), but within that service we call each of the domain model's methods that correspond to the ubiquitous language? Something like:
public class WorkItemService {
    ...
    public void Update(UpdateWorkItemRequest request) {
        WorkItem item = _workItemRepository.Get(request.WorkItemId);
        // I am leaving out the check for which properties actually changed,
        // as it's not crucial for this example.
        item.ReassignToAnotherEmployee(request.EmployeeId);
        item.Postpone(request.NewDateTime);
        item.ExtendDueDate(request.NewDueDate);
        item.Describe(request.Description);
        _workItemRepository.Save(item);
    }
}
Or maybe some third option?
Is there some rule of thumb here?
[UPDATE]
To be clear, the question can be rephrased this way: should a CRUD-like WorkItem.Update() ever become part of our model, even if our domain experts express it as "we want to be able to update a WorkItem", or should we always avoid it and ask what "update" actually means for the business?
Is your domain/sub-domain inherently CRUD?
"if our domain experts express it in a way we want to be able update a
WorkItem"
If your sub-domain aligns well with CRUD, you shouldn't try to force a domain model onto it. CRUD is not an anti-pattern and can actually be the perfect fit for certain sub-domains. CRUD becomes problematic when business experts are expressing rich business processes that are wrongly translated into CRUD UIs & backends by developers, leading to code/UL misalignment.
Note that business processes can also be expensive to discover & model explicitly. Sometimes (e.g. for lack of resources) it may be acceptable to let those live in the heads of domain experts. They will drive a simple CRUD UI from paper-based processes, as opposed to having the system guide them. CRUD may be perfectly fine here: although the processes are complex, we aren't trying to model them in the system, which remains simple.
I can't tell whether or not your domain is inherently CRUD, but I just wanted to point out that if it is, then embrace it and go for simpler business logic patterns (Active Record, Transaction Script, etc.). If you find yourself constantly wanting to map every bit of data with a single method call, then you may be in a CRUD domain.
Isolate corruption
If you settle on a domain model because it will benefit your system, then you should stop corruption from spreading through the system as early as you can. This is done with an anti-corruption layer, which in your case would be responsible for interpreting CRUD calls and transforming them into more meaningful business processes.
The anti-corruption layer should sit between the parts of the system you want to protect and the legacy/misbehaving part. That would be option #2. In this case the anti-corruption code will most likely have to compare the current state with the new state to figure out what changes were made and how to correlate them to more explicit business processes, as in the sketch below.
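A rough sketch of that interpretation step, reusing the method names from the question (the WorkItemDto shape is assumed):

public void Update(WorkItemDto incoming)
{
    WorkItem item = _workItemRepository.Get(incoming.WorkItemId);

    // Compare current state with the incoming state and translate each
    // difference into an explicit, UL-aligned operation.
    if (item.AssignedToEmployeeId != incoming.AssignedToEmployeeId)
        item.ReassignToAnotherEmployee(incoming.AssignedToEmployeeId);

    if (item.DueDate != incoming.DueDate)
        item.ExtendDueDate(incoming.DueDate);

    if (item.Description != incoming.Description)
        item.Describe(incoming.Description);

    _workItemRepository.Save(item);
}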
Like you said, option 1 is pretty much against the ruleset. Additionally, offering a generic update is no good for the clients of your domain entity.
I would go with a variant of option 2: have an application-level service, but reflect the UL in it. Your controller would call a meaningful application service method with a meaningful parameter/command that changes the state of the domain model.
I always try to think from the view of a client of my service/domain model code. As this client, I want to know exactly what I am calling. Having a CRUD-like Update is counter-intuitive, doesn't help you follow the UL, and is confusing to clients: they would need to know the code behind that Update method to know what they are changing.
To your Update: no, don't include a generic update (at least not with the name Update); always reflect business rules/processes. A client of your code would never know what it does.
If this is a specific business process that gets triggered from a specific controller API endpoint, you can name it that way. Let's say your Update is actually the business process DoAWorkItemReassignAndPostponeDueToEmployeeWentOnVacation(); then you could bulk the operation under that name, but don't go with the generic Update. Always reflect the UL, as in the sketch below.
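A hedged sketch of what that could look like (all names here are invented for the example):

public class ReassignAndPostponeDueToVacation
{
    public string WorkItemId;
    public string SubstituteEmployeeId;
    public DateTime NewDueDate;
}

public class WorkItemApplicationService
{
    private readonly IWorkItemRepository _repository;

    public WorkItemApplicationService(IWorkItemRepository repository)
    {
        _repository = repository;
    }

    // The method name carries the business intent; no generic Update.
    public void Handle(ReassignAndPostponeDueToVacation command)
    {
        WorkItem item = _repository.Get(command.WorkItemId);
        item.ReassignToAnotherEmployee(command.SubstituteEmployeeId);
        item.Postpone(command.NewDueDate);
        _repository.Save(item);
    }
}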
Hi, I'm new to DDD and am trying to develop my first application using this pattern, working in C#.
In my application I have an aggregate Contract that has child Asset entities. When an asset is added or settled, I should perform an accounting operation in another aggregate, Accounts, and ensure this in the business logic.
Should I create a domain service that ensures that each operation on the contract's assets raises an account operation, and call this service in the application layer, passing a collection of Account entities? Or should I inject a repository into this service to load the account list and save the changes to the accounts and operations list?
Or should the methods on the Asset entity raise an event that enforces the account changes? If that is the right approach, should the event handler live in the domain or in the application layer? If in the domain, should the handler on the Account entity perform the changes through an injected repository?
I'm a bit confused.
Generally, this kind of problem can be elegantly solved using events and by focusing on one aggregate per transaction.
Let's say your use case is to add an Asset to a Contract.
You will have an application service with a ContractRepository that will retrieve the Contract, and a method addAsset will be called on that Contract.
When you add an asset to your Contract aggregate, the aggregate will record a domain event, like AssetAdded, with all the relevant information about that action. Then your application service will persist the updated Contract in the database and publish the event to an asynchronous bus. At this point you can send a response.
Some subscriber inside your application will be notified about that event and will do the rest of the job. In this case you could have an UpdateAccountOnAssetAdded handler, as sketched below.
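A minimal sketch of that flow (the types below, including IAccountRepository and its members, are assumptions for illustration, not from the original answer):

public class AssetAdded
{
    public readonly Guid ContractId;
    public readonly Guid AssetId;
    public readonly decimal Value;

    public AssetAdded(Guid contractId, Guid assetId, decimal value)
    {
        ContractId = contractId;
        AssetId = assetId;
        Value = value;
    }
}

public class Contract
{
    private readonly List<AssetAdded> _pendingEvents = new List<AssetAdded>();

    public Guid Id { get; private set; }
    public IEnumerable<AssetAdded> PendingEvents { get { return _pendingEvents; } }

    public void AddAsset(Guid assetId, decimal value)
    {
        // ... enforce the Contract's own invariants here ...
        _pendingEvents.Add(new AssetAdded(Id, assetId, value));
    }
}

// Subscriber notified by the bus after the Contract has been persisted.
public class UpdateAccountOnAssetAdded
{
    private readonly IAccountRepository _accounts;

    public UpdateAccountOnAssetAdded(IAccountRepository accounts)
    {
        _accounts = accounts;
    }

    public void Handle(AssetAdded e)
    {
        var account = _accounts.GetByContract(e.ContractId);
        account.RecordAssetOperation(e.AssetId, e.Value);
        _accounts.Save(account);
    }
}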
This article will help you understand how everything is organized inside this kind of architecture.
Good luck!
Let's take the last question first. Events are for things that can be done asynchronously, and in this case async won't work. Any time an aggregate is saved it should satisfy all business rules, so you have to deal with the asset and the account at the same time.
Services should be used sparingly. They operate on more than one AR where none has an enforced relationship with the others. In your case, Contract owns all the other entities involved, so all work should be done inside a method on Contract. If that requires a repository, then inject it into the Contract.
I have a question regarding DDD and bounded contexts.
Suppose there are two bounded contexts. In the first one the aggregate root is Customer, who is able to publish an advertisement on a web page. I suppose that falls within his behavior, so in turn he has a PublishAdvertisement() method.
But the second bounded context has Advertisement as its aggregate. That implies that Advertisement has a Customer property, due to its nature of belonging to a Customer.
Both Customer and Advertisement are unique in the system and database.
My question is:
Is it advisable to delegate the creation of Advertisement from Customer to a factory or dependency injection?
Edit:
I thank you for your answers and apologize for the lack of info.
Dependency injection:
I was wondering what the best manner is to resolve a given situation. The company has a Stock of advert templates: if a template is in stock it's good for use; if it's not, then it's rented to someone. The company plans on having more Stocks. If a Customer wants to make an advert with one of these templates, he chooses a template, and if it's in stock all is good to go. Reading this as it is, I assumed there should be a domain service CheckAvailability(template); due to the nature of the service it does not fit in a specific aggregate, because it uses several aggregates with validations and queries the database. In the future, when there will be more Stocks (some rented from other companies, maybe in someone else's database), I was planning on using dependency injection to add these Stocks to the service without changing the implementation. Question is: does this seem like a good idea?
Bounded contexts:
In regard to bounded contexts and the database: yes, there is one database object and two contexts that use the same database object. Order has a reference to Customer, due to belonging to a Customer; it looks something like this:
class Order
{
    public Customer Customer { get; private set; }
    // other properties and methods
}
I would appreciate any additional information (a link, video, or book) on the implications of having two contexts like these (Customer -> Order, 1:M) relate to the same database. Thank you.
"Both Customer and Advertisement are unique in the system and database."
If that is the case, then having these concepts in two bounded contexts that use the same DB objects is a problem! The separation between two bounded contexts is a strong one, so they shouldn't communicate by changing the same DB object.
So I think you have some major design issues there. Try to fix them first by creating a model that corresponds to the real-world problem, discuss it with your domain experts.
And now to answer your main question:
Creating entities through factories is a good idea. The factory hides the (potentially complex) mechanism to create an entity and provide it with the required services. The factory receives these services through DI in the first place, and can forward them to the entity during instantiation.
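As a minimal sketch of that idea (all names here are hypothetical):

public class AdvertisementFactory
{
    private readonly IAdNumberGenerator _numbers; // service received via DI

    public AdvertisementFactory(IAdNumberGenerator numbers)
    {
        _numbers = numbers;
    }

    public Advertisement CreateFor(Customer customer, string title)
    {
        // The potentially complex construction logic is hidden here, and the
        // service's output is forwarded to the entity during instantiation.
        return new Advertisement(_numbers.Next(), customer, title);
    }
}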
Absolutely.
One thing is associating domain objects and another is working with them. An ad has an associated customer, and the customer and the ad must each be created in their respective domain layers (i.e. repository and service, at least...).
This is separating concerns in the right way, since you don't want customers to be created where ads are also created and vice versa.
I guess you already know the single responsibility principle.
What are the customer-related invariants enforced by Customer.PublishAdvertisement()?
If there aren't any, you'll be better off moving that method to the Advertisement aggregate root in the other BC, perhaps making it a constructor, or moving it to an AdvertisementFactory if the construction logic is complex. Just because the physical-world user who creates an ad is a Customer doesn't automatically imply that their aggregate root should have that method. The ad creation process can stay in the Advertisement BC, with an Advertisement application service as the entry point.
If there are, then Customer could emit an AdvertisementPublished event that the Advertisement BC subscribes to. You should be aware, though, that if you follow the "aggregate as consistency boundary" good practice, Customer can't be immediately consistent with Advertisement, which means that there can be a delay, and inconsistencies can be introduced between when the event is emitted and when the Advertisement is persisted and thus visible to other clients.
It is usually not an issue when you are creating a new AR, but keep in mind that the state of the Customer that checked the invariants and decided to create the Advertisement can change, and the invariants can be violated in the meantime, before the Advertisement is persisted.
Obviously, given that the 2 BCs share a common database (which is probably not a good idea, as @theDmi pointed out), you could decide to break that rule and make your transaction span the 2 aggregates. That's not necessarily so bad if you just persist a new Advertisement and don't modify one that can potentially be accessed concurrently.
As for dependency injection, I can't see the connection here -- what is the dependency to be injected?
I'm trying to understand the Repository pattern in order to implement it in my app, and I'm stuck on it in a certain way.
Here is a simplified algorithm of how the app accesses data:
At first the app has no data. It needs to connect to a web service to get this data, so all the low-level logic of interacting with the web service will be hidden behind the WebServiceRepository class. All the data passed from the web service to the app will be cached.
The next time the app requests the data, the cache will be searched before the web service is called. The cache is represented as a database plus XML files and is accessed through the CacheRepository.
The cached data can be in three states: valid (can be shown to the user), invalid (old data that can't be shown), and partly valid (can be shown but must be updated as soon as possible).
a) If the cached data is valid, then after we get it we can stop.
b) If the cached data is invalid or partly valid, we need to access the WebServiceRepository. If the web service call succeeds, the requested data will be cached and then shown to the user (I think this must be implemented as a second call to the CacheRepository).
c) So the entry point for data access is the CacheRepository. The web service will be called only if there is no fully valid cache.
I can't figure out where to place the logic for verifying the cache (valid/invalid/partly valid). Where should the call to the WebServiceRepository go? I think this logic can't be placed in either of the repositories, because that would violate the Single Responsibility Principle (SRP) from SOLID.
Should I implement some sort of RepositoryService and put all the logic in it? Or maybe there is a way to link the CacheRepository and the WebServiceRepository?
What patterns and approaches are there to implement this?
Another question is how to get partly-valid data from the cache and then request the web service in a single method call. I am thinking of using delegates and events. Are there other approaches?
Please advise: what is the correct way to link all the functionality listed above?
P.S. Maybe I described all this a bit confusingly. I can give additional clarifications if needed.
P.P.S. By CacheRepository (and WebServiceRepository) I mean a set of repositories - CustomerCacheRepository, ProductCacheRepository, and so on. Thanks @hacktick for the comment.
If your web service gives you CRUD methods for different entities, create a repository for every entity root.
If there are customers, create a CustomerRepository. If there are documents with attachments as children, create a DocumentRepository that returns documents with the attachments as a property.
A repository is only responsible for a specific type of entity (i.e. customers or documents). Repositories are not used for "cross-cutting concerns" such as caching (i.e. your example of a CacheRepository).
Inject (e.g. via StructureMap) an IDataCache instance into every repository.
A call to Repository.GetAll() returns all entities for the current repository; every entity is registered in the cache, noting the id of that object.
A call to Repository.FindById() checks the cache first for the id; if the object is valid, it is returned.
Notifications about the invalidation of an object are routed to the cache. You could implement client-side invalidation, or push messages from the server to the client, for example via message queues.
Information about whether an object is currently valid should not be stored in the entity object itself, but only in the cache.
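A minimal sketch of that arrangement (IDataCache, its members, and the ICustomerWebService client are all assumed names, not a real API):

public interface IDataCache
{
    bool TryGet<T>(int id, out T value); // false if missing or not currently valid
    void Put<T>(int id, T value);
}

public class CustomerRepository
{
    private readonly IDataCache _cache;
    private readonly ICustomerWebService _webService;

    public CustomerRepository(IDataCache cache, ICustomerWebService webService)
    {
        _cache = cache;
        _webService = webService;
    }

    public Customer FindById(int id)
    {
        // Validity lives in the cache, not in the entity itself.
        Customer cached;
        if (_cache.TryGet(id, out cached))
            return cached;

        Customer fresh = _webService.GetCustomer(id);
        _cache.Put(id, fresh);
        return fresh;
    }
}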
DTO
I'm building a Web application I would like to scale to many users. Also, I need to expose functionality to trusted third parties via Web Services.
I'm using LLBLGen to generate the data access layer (using SQL Server 2008). The goal is to build a business logic layer that shields the Web App from the details of DAL and, of course, to provide an extra level of validation beyond the DAL. Also, as far as I can tell right now, the Web Service will essentially be a thin wrapper over the BLL.
The DAL, of course, has its own set of entity objects, for instance CustomerEntity, ProductEntity, and so forth. However, I don't want the presentation layer to have access to these objects directly, as they contain DAL-specific methods, the assembly is specific to the DAL, and so on. So the idea is to create Data Transfer Objects (DTOs): essentially, plain old C#/.NET objects that have all the fields of, say, a CustomerEntity that actually map to the database table Customer, but none of the other stuff, except maybe some IsChanged/IsDirty properties. So there would be CustomerDTO, ProductDTO, etc. I assume these would inherit from a base DTO class. I believe I can generate these with some template for LLBLGen, but I'm not sure about it yet.
So the idea is that the BLL will expose its functionality by accepting and returning these DTO objects. I think the Web Service will handle converting these objects to XML for the third parties using it, since many may not be using .NET (also, some things will be script-callable from AJAX calls on the Web App, using JSON).
I'm not sure of the best way to design this and exactly how to go forward. Here are some issues:
1) How should this be exposed to the clients (the presentation tier and the Web Service code)?
I was thinking that there would be one public class that has these methods; every call would be an atomic operation:
InsertDTO, UpdateDTO, DeleteDTO, GetProducts, GetProductByCustomer, and so forth ...
Then the clients would just call these methods and pass in the appropriate arguments, typically a DTO.
Is this a good, workable approach?
2) What to return from these methods? Obviously, the Get/Fetch sort of methods will return DTOs. But what about inserts? Part of the signature could be:
InsertDTO(DTO dto)
However, when inserting, what should be returned? I want to be notified of errors. Also, I use autoincrementing primary keys for some tables (though a few tables have natural keys, particularly many-to-many ones).
One option I thought about was a Result class:
class Result
{
    public Exception Error { get; set; }
    public DTO AffectedObject { get; set; }
}
So, on an insert, the DTO would get its ID property (like CustomerDTO.CustomerID) set and then be put in this result object. The client will know there was an error if Result.Error != null, and otherwise it would know the new ID from the Result.AffectedObject property.
Is this a good approach? One problem is that it seems to pass a lot of redundant data back and forth (when it's just the ID that is needed). I don't think adding an "int NewID" property would be clean, because some inserts will not have an autoincrementing key like that. Another issue is that I don't think Web Services would handle this well: I believe they would just return the base DTO for AffectedObject in the Result class, rather than the derived DTO. I suppose I could solve this by having a LOT of different kinds of Result objects (maybe derived from a base Result that carries the Error property), but that doesn't seem very clean.
All right, I hope this isn't too wordy but I want to be clear.
1: That is a pretty standard approach that lends itself well to a "repository" implementation and good unit testability.
2: Exceptions (which should be declared as "faults" on the WCF boundary, btw) will get raised automatically; you don't need to handle that directly. For data, there are three common approaches:
use ref on the contract (not very pretty)
return the (updated) object - i.e. public DTO SomeOperation(DTO item);
return just the updated identity information (primary-key / timestamp / etc)
One nice thing about all of these is that none of them necessitates a different type per operation (contrast your Result class, which would need to be duplicated per DTO). A sketch of the second option follows.
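For illustration, a hedged sketch of the second option (the _dal field and its InsertCustomer call are hypothetical):

// One contract shape covers inserts for any entity; the server-assigned key
// flows back to the caller on the returned object.
public CustomerDTO InsertCustomer(CustomerDTO item)
{
    item.CustomerID = _dal.InsertCustomer(item); // hypothetical DAL call returning the new key
    return item;
}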
Q1: You can think of your WCF Data Contract composite types as DTOs to solve this problem. This way your UI layer only has access to the DataContract's DataMember properties. Your atomic operations would be the methods exposed by your WCF Interface.
Q2: Configure your Response data contracts to return a new custom type with your primary keys etc... WCF can also be configured to bubble exceptions back to the UI.