I have a particular scenario where an aggregate has behavior to check whether an address is valid. This validation is triggered on the aggregate via inline AJAX form validation on a web site. In between the aggregate and the web site is an application service which orchestrates the two.
As it stands, I create what is essentially an empty aggregate and set the address property so the check can be done. Based on this I return true or false back to the web site (ASP.NET MVC). This doesn't seem like the right approach in the context of DDD.
public bool IsAddressAvailable(string address)
{
    var aggregate = new Aggregate
    {
        Address = address
    };
    return aggregate.IsAddressValid();
}
What options do I have that would work better using DDD? I was considering separating it out into a domain service. Any advice would be appreciated!
Normally your aggregates should not expose Get- methods; you always want to follow the Tell-Don't-Ask principle.
If something needs to be done, you call an aggregate method and it gets it done.
But you normally don't want to ask an Aggregate whether its data is valid or not. Especially if you already have a service that does this job for you, why mix this "validation" into your aggregates?
The rule of thumb is:
If something is not needed for the Aggregate's behavior, it doesn't need to be part of the aggregate
You only pass valid data into your domain. This means that when you call an aggregate behavior, asking it to do something for you, the data you pass has already been validated. You don't want to pollute your domain with data validation, if-else branches, etc. Keep it straight and simple.
In your case, as far as I understand, you only need to validate the user's input, so you don't need to bother your domain with it, for two reasons:
You don't do anything that changes the system's state. It is considered a "read" operation, so do it the straightforward way (call your service, validate against some tables, etc.).
You cannot rely on the validation result. It tells you "correct" now, and in 10 milliseconds (while the response travels over the wire, while the HTML is rendered in the browser, etc.) it is already history; it MAY change at any time. So this validation is just guidance, no more.
Therefore, if you only need "read-only" validation, just do it against your service.
If you need to validate the user's data as part of an operation, then do it before you call the domain (perhaps in your command handler), as in the sketch below.
And be aware of race conditions (DB unique constraints can help).
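A minimal sketch of that last point, assuming a hypothetical command handler with a read-side IAddressLookup service and a repository (none of these names come from the question):

// All names here are hypothetical -- this only illustrates
// "validate before you call the domain" from the points above.
public class RegisterAccountHandler
{
    private readonly IAddressLookup _addressLookup;  // read-side check, no aggregate involved
    private readonly IAccountRepository _accounts;

    public RegisterAccountHandler(IAddressLookup addressLookup, IAccountRepository accounts)
    {
        _addressLookup = addressLookup;
        _accounts = accounts;
    }

    public void Handle(RegisterAccount command)
    {
        // Read-only validation happens here, before the domain is touched.
        if (!_addressLookup.IsAvailable(command.Address))
            throw new ValidationException("Address is not available.");

        // The domain only ever sees data that has already been validated.
        var account = Account.Register(command.AccountId, command.Address);
        _accounts.Save(account);  // a DB unique constraint remains the guard against races
    }
}

The inline AJAX check from the question would then call the same IAddressLookup directly from the controller action, with no aggregate involved at all.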
You should also consider reading this to think deeper about set validation: http://codebetter.com/gregyoung/2010/08/12/eventual-consistency-and-set-validation/
Related
I'm studying CQRS and FluentValidation and I found myself wondering where the best place is to validate information against the database: directly in the handlers responsible for orchestrating the requests, or in the validations even before reaching the handlers?
I have a business rule that involves the identity of an entity, where in the database this attribute is a unique key; that is, this document can only be linked to a single record. If the document already exists in the database and there is an attempt to insert it again, the database will return a constraint violation error, and I don't want that to happen; I want to handle as few exceptions as possible coming directly from the database.
So I want to create a rule that validates at the time of insertion whether the document in question is already linked to another entity, and if it is, I will return an error stating that it is already linked to a record.
I managed to do this validation both ways. The first way was to validate the document directly in the handler, where I check whether it is already linked to a record in the database and, if it is, return the error. The second way was to add a dependency injection inside the class that does the validations using FluentValidation.
So, my question is, what is the recommended practice for this situation?
In any application you can split validation into two groups: superficial validation and domain validation.
Superficial validation
Superficial validation is validation that checks whether all values are in the right form. This is probably the first thing you do.
For example, you want to validate that a string is not empty or that a field falls within a range of predefined values.
A good way to do this is to parse the primitive values into classes which represent the constraints of the field.
For example, a string value that can't be empty should be parsed into a Text class, which can never hold an empty value.
With this approach you'll never have to do this superficial validation anywhere else in your app, because you only work with the Text class, which can never be empty.
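A minimal sketch of such a class (the Text name follows the answer; everything else is illustrative):

using System;

// Sketch only: a string that is guaranteed non-empty once constructed.
public sealed class Text
{
    public string Value { get; }

    private Text(string value)
    {
        Value = value;
    }

    public static Text Parse(string input)
    {
        // The superficial validation lives here, and only here.
        if (string.IsNullOrWhiteSpace(input))
            throw new ArgumentException("Text cannot be empty.", "input");
        return new Text(input.Trim());
    }
}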
Domain validation
Domain validation is about business rules. This kind of validation is performed in the application or domain layer of your app.
A domain model is responsible for protecting its own state, so most of the validation goes in the domain model.
If you need to validate state across multiple domain models, you check this in the application layer. This is where your command handlers should be.
In your example, this is the place where you would check whether the document is linked to a single record. In other words, this kind of validation is done in your command handlers.
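For instance, a handler along these lines (all names are illustrative, not taken from your code):

using System.Threading.Tasks;

// Sketch: the cross-record uniqueness rule sits in the command handler,
// not in a FluentValidation rule class; DomainException is hypothetical.
public class CreateRecordHandler
{
    private readonly IRecordRepository _records;

    public CreateRecordHandler(IRecordRepository records)
    {
        _records = records;
    }

    public async Task Handle(CreateRecordCommand command)
    {
        // Domain validation: the document may be linked to one record only.
        if (await _records.ExistsWithDocument(command.Document))
            throw new DomainException("Document is already linked to another record.");

        var record = new Record(command.Document);
        await _records.Add(record);  // the unique constraint stays as the last line of defence
    }
}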
It's a good idea to use unique and foreign-key constraints in your database as the last line of defence against data corruption.
But having validation in both places makes things more complex, harder to test, and harder for others to understand. Reducing complexity should be every developer's goal.
I like this quote:
https://twitter.com/BrentO/status/1227589450814894080
What should we do when our UI is not task-based, i.e. its tasks don't correspond to our entity methods, which, in turn, correspond to the ubiquitous language?
For example, let's say we have a domain model for a WorkItem that has the properties: StartDate, DueDate, AssignedToEmployeeId, WorkItemType, Title, Description, CreatedbyEmployeeId.
Now, some things can change on the WorkItem, and broken down this boils down to methods like:
WorkItem.ReassignToAnotherEmployee(string employeeId)
WorkItem.Postpone(DateTime newDateTime)
WorkItem.ExtendDueDate(DateTime newDueDate)
WorkItem.Describe(string description)
But on our UI side there is just one form with fields corresponding to our properties and a single Save button. So, a CRUD UI. Obviously, that leads to a single CRUD REST API endpoint like PUT domain.com/workitems/{id}.
The question is: how do we handle requests that come to this endpoint from the domain model perspective?
OPTION 1
Have a CRUD-like method WorkItem.Update(...)? (This, obviously, defeats the whole purpose of the ubiquitous language and DDD.)
OPTION 2
The application service called by the endpoint controller has a method WorkItemsService.Update(...), but within that service we call each of the domain model methods that correspond to the ubiquitous language. Something like:
public class WorkItemService {
    ...
    // 'params' is a reserved word in C#, so the request data is passed in
    // a DTO here (UpdateWorkItemRequest is a placeholder type)
    public void Update(UpdateWorkItemRequest request) {
        WorkItem item = _workItemRepository.Get(request.WorkItemId);
        // I am leaving out the check for which properties actually changed
        // as it's not crucial for this example
        item.ReassignToAnotherEmployee(request.EmployeeId);
        item.Postpone(request.NewDateTime);
        item.ExtendDueDate(request.NewDueDate);
        item.Describe(request.Description);
        _workItemRepository.Save(item);
    }
}
Or maybe some third option?
Is there some rule of thumb here?
[UPDATE]
To be clear, the question can be rephrased this way: should a CRUD-like WorkItem.Update() ever become part of our model, even if our domain experts express it as "we want to be able to update a WorkItem", or should we always avoid it and dig into what "update" actually means for the business?
Is your domain/sub-domain inherently CRUD?
"if our domain experts express it in a way we want to be able update a
WorkItem"
If your sub-domain aligns well with CRUD you shouldn't try to force a domain model. CRUD is not an anti-pattern and can actually be the perfect fit for certain sub-domains. CRUD becomes problematic when business experts are expressing rich business processes that are wrongly translated to CRUD UIs & backends by developers, leading to code/UL misalignment.
Note that business processes can also be expensive to discover & model explicitly. Sometimes (e.g. lack of resources) it may be acceptable to let those live in the heads of domain experts. They will drive a simple CRUD UI from paper-based processes as opposed to having the system guide them. CRUD may be perfectly fine here since although processes are complex, we aren't trying to model them in the system which remains simple.
I can't tell whether or not your domain is inherently CRUD, but I just wanted to point out that if it is, then embrace it and go for simpler business logic patterns (Active Record, Transaction Script, etc.). If you find yourself constantly wanting to map every bit of data with a single method call then you may be in a CRUD domain.
Isolate corruption
If you decide that a domain model will benefit your system, then you should stop corruption from spreading through the system as early as you can. This is done with an anti-corruption layer, which in your case would be responsible for interpreting CRUD calls and transforming them into more meaningful business processes.
The anti-corruption layer should sit between the parts of the system you want to protect and the legacy/misbehaving/etc. part. That would be option #2. In this case the anti-corruption code will most likely have to compare the current state with the new state to try and figure out what changes were made and how to correlate them to more explicit business processes.
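A rough sketch of what that comparison might look like inside the application service (it assumes Postpone moves the StartDate; the request type and property names are placeholders, not taken from the question):

// Sketch: diff the incoming CRUD payload against current state and
// translate the differences into explicit, UL-aligned operations.
public void Update(UpdateWorkItemRequest request)
{
    WorkItem item = _workItemRepository.Get(request.WorkItemId);

    if (item.AssignedToEmployeeId != request.EmployeeId)
        item.ReassignToAnotherEmployee(request.EmployeeId);

    if (item.StartDate != request.NewDateTime)
        item.Postpone(request.NewDateTime);

    if (item.DueDate != request.NewDueDate)
        item.ExtendDueDate(request.NewDueDate);

    if (item.Description != request.Description)
        item.Describe(request.Description);

    _workItemRepository.Save(item);
}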
Like you said, option 1 is pretty much against the ruleset. Additionally, offering a generic update is no good for the clients of your domain entity.
I would go with a 2-ish option: have an application-level service, but reflect the UL in it. Your controller would need to call a meaningful application service method with a meaningful parameter/command that changes the state of a domain model.
I always try to think from the view of a client of my service/domain model code. As this client, I want to know exactly what I am calling. Having a CRUD-like Update is counter-intuitive, doesn't help you follow the UL, and is confusing to clients. They would need to know the code behind that Update method to know what they are changing.
To your update: no, don't include a generic update (at least not with the name Update); always reflect business rules/processes. A client of your code would never know what it does.
If this is a specific business process that gets triggered from a specific controller API endpoint, you can call it that way. Let's say your Update is actually the business process DoAWorkItemReassignAndPostponeDueToEmployeeWentOnVacation(); then you could bulk this operation, but don't go with the generic Update. Always reflect the UL.
I'm trying to find the security flaws in my ASP.NET login page. How would I go about sanitizing the user input so that XSS and SQL injection are not possible? I feel that my LINQ queries are secure, but I could be wrong. Please let me know if you need any more information.
string username = usernameTxt.Text;

Company check = (from u in context.Company
                 where u.companyadminUserName.Equals(username)
                 select u).FirstOrDefault();

if (check == null)
{
    return BAD_USER;
}
else
{
    return GOOD_USER;
}
Well, this is where it helps to know something about the implementation of LINQ to SQL you are using. In practice, LINQ to SQL translates queries like yours into parameterized SQL, so arguments are passed as parameters rather than spliced into the query text; still, it never hurts to double-check.
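If you ever drop below LINQ to hand-written SQL, you can keep the same guarantee with explicit parameters. A minimal sketch (assuming System.Data.SqlClient, an available connectionString, and the BAD_USER/GOOD_USER constants from the question):

// Sketch only: the same lookup via ADO.NET with an explicit parameter,
// so the user-supplied value can never change the structure of the query.
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(
    "SELECT COUNT(*) FROM Company WHERE companyadminUserName = @username",
    connection))
{
    command.Parameters.AddWithValue("@username", username);
    connection.Open();
    int matches = (int)command.ExecuteScalar();
    return matches == 0 ? BAD_USER : GOOD_USER;
}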
In general, you want there to always be a middleman sitting between the user interface and the service layer. You probably have them all over your apps already: validators. They are more obviously necessary for checking that a number is a number or that a zip code is a zip code, but nowhere are the stakes higher than in making sure no user input gets through unescaped. But that's not good enough; that's just the beginning.
What I would recommend is to institute something like what ASP.NET provides at the boundary (preventing HTML from being entered into user interface inputs) but also at the boundary between your thin controller and thick service, or as its own layer.
In such a design, maybe you can implement an attribute or annotation like InterrogateTheCarrierAttribute on every database input parameter (assuming you have wrapped all of your database calls in parameterized functions and are not concatenating strings to make queries). And the same goes for anything like it: every PowerShell call wrapper, every sh wrapper, every function that accesses bank accounts and withdraws money, etc. Then, as the objects make their way through in one form or another, every time they cross a "validation boundary" they have to be sanitized as if there were no guarantee of sanitation beyond it. This is overkill, but unless the validation is costly, why not? A sketch of the idea follows.
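To make that concrete, a very rough sketch (the attribute name comes from the paragraph above; everything else is hypothetical, and in a real system the attribute would be enforced by interception/AOP rather than by convention):

using System;

// Marker for parameters that must be sanitized at every validation boundary.
[AttributeUsage(AttributeTargets.Parameter)]
public sealed class InterrogateTheCarrierAttribute : Attribute { }

public class CommentStore
{
    // Hypothetical database wrapper: the parameter is marked, and the value
    // is sanitized on the way in as if nothing upstream could be trusted.
    public void SaveComment([InterrogateTheCarrier] string comment)
    {
        string safe = System.Net.WebUtility.HtmlEncode(comment ?? string.Empty);
        // ... parameterized INSERT using 'safe' ...
    }
}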
Think of this: instead of bad user interface input, what if a text file or a trusted web service becomes compromised? If you think about it in terms of "this tier, this tier", you can convince yourself that there is safety where there is not. Microsoft's web server would never send you malformed packets, right? Your logger would never send your database bad SQL, right? But it can happen. So with the "cross-cutting" approach to validation, you always validate at the boundaries, at least on the way in, and possibly on the way out. And by doing so, you make it less likely for a bad assumption to leave your database interface wide open.
Example: take your query. The issues at stake are 1) whether username gets escaped, and 2) whether it is possible for an unescaped username to somehow overcome the fact that LINQ to SQL is a major abstraction and doesn't lend itself readily to injection.
Now, you should know the answers to those, but your system shouldn't require omniscience to be effective. So, I propose implementing a cross-cutting layer for validation, however you want to do it.
I'm trying to understand the Repository pattern in order to implement it in my app, and I'm stuck on one aspect of it.
Here is a simplified algorithm of how the app accesses data:
The first time, the app has no data. It needs to connect to a web service to get this data, so all the low-level logic of interacting with the web service is hidden behind the WebServiceRepository class. All the data passed from the web service to the app is cached.
The next time the app requests the data, the cache is searched before the web service is called. The cache is backed by a database and XML files and is accessed through the CacheRepository.
The cached data can be in three states: valid (can be shown to the user), invalid (old data that can't be shown) and partly valid (can be shown but must be updated as soon as possible).
a) If the cached data is valid, then after we get it we can stop.
b) If the cached data is invalid or partly valid, we need to access the WebServiceRepository. If the call to the web service succeeds, the requested data is cached and then shown to the user (I think this must be implemented as a second call to the CacheRepository).
c) So the entry point for data access is the CacheRepository. The web service is called only if there is no fully valid cache.
I can't figure out where to place the logic that verifies the cache state (valid/invalid/partly valid), or where to place the call to the WebServiceRepository. I think this logic can't be placed in either of the repositories, because that would violate the Single Responsibility Principle (SRP) from SOLID.
Should I implement some sort of RepositoryService and put all the logic in it? Or maybe there is a way to link the WebServiceRepository and the CacheRepository?
What patterns and approaches would implement that?
Another question is how to get partly valid data from the cache and then request the web service in a single method call. I'm thinking of using delegates and events; are there other approaches?
Please give me some advice. What is the correct way to link all the functionality listed above?
P.S. Maybe I described it all a bit confusingly. I can give additional clarification if needed.
P.P.S. By CacheRepository (and WebServiceRepository) I mean a set of repositories: CustomerCacheRepository, ProductCacheRepository and so on. Thanks #hacktick for the comment.
If your web service gives you CRUD methods for different entities, create a repository for every entity root.
If there are customers, create a CustomerRepository. If there are documents with attachments as children, create a DocumentRepository that returns documents with attachments as a property.
A repository is only responsible for a specific type of entity (i.e. customers or documents). Repositories are not used for "cross-cutting concerns" such as caching (i.e. your example of a CacheRepository).
Inject (e.g. via StructureMap) an IDataCache instance into every repository.
A call to Repository.GetAll() returns all entities for the current repository. Every entity is registered in the cache, and the id of the object is noted in the cache.
A call to Repository.FindById() checks the cache first for the id. If the object is valid, return it.
Notifications about invalidation of an object are routed to the cache. You could implement client-side invalidation, or push messages from the server to the client, for example via message queues.
Information about whether an object is currently valid should not be stored in the entity object itself, but only in the cache.
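A rough sketch of that shape (the interfaces and names are illustrative):

// Sketch: the repository owns entity access; the injected cache owns
// validity. Nothing about valid/invalid lives on the entity itself.
public class CustomerRepository
{
    private readonly IDataCache<Customer> _cache;    // injected, e.g. via StructureMap
    private readonly ICustomerWebService _service;   // the remote source

    public CustomerRepository(IDataCache<Customer> cache, ICustomerWebService service)
    {
        _cache = cache;
        _service = service;
    }

    public Customer FindById(int id)
    {
        // Check the cache first; only a valid entry short-circuits the call.
        Customer cached;
        if (_cache.TryGetValid(id, out cached))
            return cached;

        Customer fresh = _service.GetCustomer(id);
        _cache.Register(id, fresh);  // note the id of the object in the cache
        return fresh;
    }
}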
OK guys, another question of mine that seems to be widely asked and generic. For instance, I have an accounts table in my db. On the client (a desktop WinForms app) I have the appropriate functionality to add a new account; let's say in the UI it's a couple of textboxes and one button.
Another requirement is account uniqueness, so I can't add two identical accounts. My question is whether I should check for the account's existence on the client (making a query and looking at the result) or make a stored procedure for adding a new account and check the account's existence there. To me, it seems better to just make a stored proc: there I can make any needed checks and, after all the checks pass, add the new account. But there are pros and cons to that approach. For example, it will be very difficult to manage the language of the messages that the stored proc produces.
POST EDIT
I already have the database constraints, etc. The issue is how to handle the situation where the user is adding an account that already exists.
POST EDIT 2
Account uniqueness is exposed here just as a tiny, simple example of business logic; my question is more about handling complicated business logic in that accounts domain.
So, how can I manage this?
I believe that my question is basic and has a proven solution. My tools are C# and .NET Framework 2.0. Thanks in advance, guys!
If the application is to be multi-user (i.e. not just a single desktop app with a single user, but a centralised DB with the app acting as a client on possibly many workstations), then it is not safe to rely on the client (app) to check for things such as uniqueness, existence, free numbers, etc., as there is a distinct possibility of change happening between calls (unless read locking is used, but this often becomes more of an issue than a help!).
There is of course the option to pre-check and then re-check (pre at app level, re at DB level), but this gives extra DB traffic, so it depends on whether that is a problem for you.
When I write SPROCs that return to an app, I always use the same framework: I include parameters for a return code and a message and always populate them. Then I can use standard routines to call them and even add the parameters automatically. I can then either display the message directly on failure, or use the return code to localise it as required (or automate a response). I know some DBs (like SQL Server) will return Return_Code parameters, but I implement my own so I can leave the inbuilt ones for serious system-based errors and unexpected failures. It also allows me to have my own numbering system for return codes (i.e. grouping them to match enums in the code and/or grouping by severity).
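A sketch of what calling such a SPROC looks like from the app side (assuming System.Data / System.Data.SqlClient and an open connection; the procedure, parameter names and ShowError are illustrative):

// Sketch: every SPROC carries its own @ReturnCode / @Message outputs,
// so the caller can display or localise the result uniformly.
using (var command = new SqlCommand("dbo.AddAccount", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    command.Parameters.AddWithValue("@AccountName", accountName);

    SqlParameter returnCode = command.Parameters.Add("@ReturnCode", SqlDbType.Int);
    returnCode.Direction = ParameterDirection.Output;
    SqlParameter message = command.Parameters.Add("@Message", SqlDbType.NVarChar, 500);
    message.Direction = ParameterDirection.Output;

    command.ExecuteNonQuery();

    if ((int)returnCode.Value != 0)
        ShowError((string)message.Value);  // or map the code to a localised message
}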
On web apps I have also used a different concept at times. For example, sometimes a request is made for a new account but multiple pages are required (a profile, for example). Here I often use a header table that generates a hidden user ID against the requested unique username, a timestamp, and some way of recognising the user (IP address, etc.). If after x hours it is not used, the row is deleted from the header table, freeing up the number (depending on the DB, the number may never become usable again; this doesn't really matter, as it is just used to keep the user data unique until the application is submitted) and the username. If completed correctly, the records are simply copied across to the proper active tables.
//Edit - To Add:
Good point. But account uniqueness is just a very tiny, simple sample. What about more complex requirements for accounts in the business logic? For example, if I implement it just in client code (in the WinForms app) it will work OK, but if I want another kind of app (say a console version of my app, or a website) to work with these accounts, I have to write all this logic again in the new app! So, I'm looking for some method to keep the data right from both sides (the server DB side and the client side). – kseen yesterday
If the requirement is ever for multi-use, then it is best to separate it. Putting it into a separate Class Library project allows the DLL to be used by your WinForms app, console program, service, etc. Although I would still prefer rock-face validation (DB level), as it is the closest point in time to any action and the least likely to be gazumped.
The usual way is to separate into three projects: a Display Layer [DL] (your WinForms/console/service/etc. project), a Business Application Layer [BAL] (which holds all the business rules and calls to the DAL; it knows nothing about the display medium nor about the database technology) and finally a Data Access Layer [DAL] (this has all the database calls; it can be very basic, with a method for insert/update/select/delete at SQL and SPROC level, and maybe some classes for passing data back and forth). The DL references only the BAL, which references the DAL. The DAL can be swapped for each technology (say, a change from SQL Server to MySQL) without affecting the rest of the application, and business rules can be changed in the BAL with no effect on the DAL (the DL may be affected if new methods are added or display requirements change due to data changes, etc.). This framework can then be used again and again across all your apps, and it makes it easy to make quite drastic changes (like DB topology).
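In code, the shape is roughly this (all type names are illustrative):

using System;

// Data Access Layer: knows only the database.
public interface IAccountDal
{
    bool AccountExists(string accountCode);
    void InsertAccount(string accountCode, string name);
}

// Business Application Layer: holds the rules; knows nothing about the
// display medium or the database technology behind IAccountDal.
public class AccountBal
{
    private readonly IAccountDal _dal;

    public AccountBal(IAccountDal dal)
    {
        _dal = dal;
    }

    public void AddAccount(string accountCode, string name)
    {
        if (_dal.AccountExists(accountCode))
            throw new InvalidOperationException("Account already exists.");
        _dal.InsertAccount(accountCode, name);
    }
}

// Display Layer: WinForms, console, website -- each references only the BAL,
// so the business rules are written once and shared by every front end.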
This type of logic is usually kept in code for easier maintenance (which includes testing). However, if this is just a personal throwaway application, do what is simplest for you. If it's something that is going to grow, it's better to put good practices in place now to ease maintenance and change later.
I'd have an AccountsRepository class (for example) with an AddAccount method that does the insert / calls the stored procedure. Using database constraints (as HaLaBi mentioned), it would fail when trying to insert a duplicate. You would then determine in code how to handle this issue (passing a message back to the UI that it couldn't add). This would allow you to put tests around all of it. The only change you'd make in the db is to add the constraint. For example:
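A sketch of that approach (SQL Server raises error number 2627 for a unique constraint violation; the Account type and method shape are illustrative):

using System.Data.SqlClient;

// Sketch: rely on the unique constraint and translate the failure into
// something the UI can present.
public class AccountsRepository
{
    public bool TryAddAccount(Account account, out string error)
    {
        try
        {
            // insert / call the stored procedure here
            error = null;
            return true;
        }
        catch (SqlException ex)
        {
            if (ex.Number == 2627)  // unique constraint violated
            {
                error = "An account with this code already exists.";
                return false;
            }
            throw;  // anything else is unexpected
        }
    }
}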
Just my 2 cents on a Thursday morning (before my cup of green tea). :)
I think the answer, like many, is "it depends".
For sure it is a good thing to push logic as deeply as possible towards the database. This prevents bad data no matter how the user tries to get it in there.
This, in simple terms, results in applications that TRY - FAIL - RECOVER when attempting an invalid transaction. You need to check each call (stored proc, triggered insert, etc.) and, IF something bad happens, recover from that condition: usually something like telling the user an issue occurred, resetting the form, and letting them try again.
I think, at a minimum, this needs to happen.
But in addition, to make a really nice experience for the user, the app should also preemptively check certain data conditions ahead of time and simply prevent the user from making bad inserts in the first place.
This is of course harder, and sometimes means double coding of business rules (once in the app, and once in the DB constraints), but it can make for a dramatically better user experience.
The solution is more about being methodical than technical:
Implement "Defensive Programming" and "Design by Contract".
If the chances of a business rule changing over time are very low, apply the constraint at the database level.
Create a "validation or rules & aggregation" layer (or class) that manages such conditions/constraints for an entity and/or a specific property.
A much smarter way to do this would be to make a user control for the entity and/or specific property (in your case the "Account Code"), which would internally use that validation layer.
This will give you a systematic way of development and a more scalable and maintainable application architecture.
If your application is a website, then along with the client-side validation it is always better to have validation in the business layer or C# code as well.
Whenever a validation fails, you could use a "custom error message" library to ensure message content is standard across the application.
If the errors are raised from the database itself (i.e. from stored procedures), you could use the same "custom error message" class to convert the SQL exception into the fixed or standardized message format.
I know this is all a bit much, but it will always be good for the future.
Hope this helps.
As you should not depend on a specific storage provider (DB [MySQL, MSSQL, ...], flat file, XML, binary, cloud, ...), in a professional project all constraints should be checked in the business logic (the model).
The model shouldn't have to know anything about the storage provider.
Uncle Bob said something about architecture and databases: http://blog.8thlight.com/uncle-bob/2011/11/22/Clean-Architecture.html