I'm new to TDD and ATDD, and I'm seeking to understand the connection between a user story and its acceptance criteria. For context, I'm building a 3-tier application in C#, with an MVC front-end.
Say, for instance, that I have the following user story:
In order to ensure Capacity Data is entered correctly
As a person updating this data
I want feedback when the data entered doesn't conform to our business rules.
It makes sense to me to break this down and define what "Capacity Data" is, and the business rules that govern this.
For instance, maybe it has a "Number of Machines" property that has to be greater than zero.
What I want to avoid doing is testing the framework--and if I follow correctly, what I want to do is test that this business logic is correctly implemented, i.e. that:
Business rules ("Number of machines must be greater than zero", and others) are correctly implemented in the codebase.
If a business rule is violated, the user is alerted of this mistake.
I believe I could test rule #2 by validating that an invalid model state in the controller redirects back to the same page, for instance--and there are tons of examples of that.
However, doesn't doing this require putting decorations on the viewmodel--and doesn't that ultimately implement the business rule from the perspective of the user (thus satisfying #1)?
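For illustration, here's roughly what that decoration-based approach looks like (the names here are hypothetical, not my real code):

public class CapacityViewModel
{
    [Range(1, int.MaxValue, ErrorMessage = "Number of machines must be greater than zero.")]
    public int NumMachines { get; set; }
}

public class CapacityController : Controller
{
    [HttpPost]
    public ActionResult Update(CapacityViewModel model)
    {
        if (!ModelState.IsValid)
            return View(model); // redisplay with validation messages (rule #2)

        // ...hand off to the service layer...
        return RedirectToAction("Index");
    }
}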
Let's say I have the following sort-of statement/unit-test:
[Test]
public void GivenCapacityModelWhenNumMachinesZeroOrLessThenModelShouldBeInvalid()
{
    // Given
    IValidatorThing validator = new ValidatorThing(); // What enforces the rule? Should this be a controller service? Or a decorator such as [Range(0.000001, 1000000)]? Doesn't each require different testing methods?
    var invalidModel = new CapacityModel(); // Or the viewmodel?
    double zeroValue = 0;
    invalidModel.NumMachines = zeroValue;

    // When
    var modelIsValid = validator.ValidateModel(invalidModel);

    // Then
    Assert.IsFalse(modelIsValid);
}
The above won't compile, of course. I've left out any particular mocking or fixturing framework for now, to keep it simple. So, to make this test at least compile (but still fail), I have some decisions to make:
Is CapacityModel supposed to be a viewmodel? Or a DTO from the service layer? Or a metadata class in the DAL? I can implement any of these and make the test pass...but what should I really be testing?
Is the "validator" checking the behavior of a service that validates this model property? Or data annotations on the CapacityModel? Again, what should I really be testing in the context of a 3-tier application?
Some things to consider:
One thing I do know is that the database tables will have constraints that describe these rules--so it would seem that the purpose of this rule is really to communicate the rules to whoever is using the application. In that case, could I safely assume it would violate DRY to have the rules appear in three places: the viewmodel, the data entity, and the database tables?
The reason we have these rules in the database is that we want to ensure that if a DBA needs to mess with the records, the rules aren't accidentally violated. However, to my knowledge there isn't a great way to translate those CONSTRAINT rules up to the DAL of the application...so I suppose they would need to be repeated at least one more time in the application for the sake of communicating them to the user.
So, if I were to write a unit test to fulfill the business rule, wouldn't I be writing it only to ensure the rules mirror the database? And separately, also writing a unit test that ensures the correct message is displayed to the user?
Any guidance you can offer would be wonderful. I want to feel that the decisions I've made were reasonable, or at least have an idea of alternative ways to solve the problem better.
Thanks,
Lawrence
EDIT:
So, ultimately I was trying to drive at a centralized way of managing validation in the application so that I could have separation of concerns--i.e., that the controller only cared about routing, the viewmodels only cared about displaying data, validators only cared about validation, etc...as opposed to having validation in the viewmodel, for instance.
I found a very helpful article that helped me to grasp how to do this, using the existing MVC infrastructure. Hopefully this will help others looking into these types of scenarios.
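As a rough illustration of the centralized idea (hypothetical names; the article's own solution plugs into MVC's validation infrastructure rather than hand-rolling this):

using System.Collections.Generic;

// Validation lives in one place, with no knowledge of MVC.
public interface IValidator<T>
{
    IEnumerable<ValidationError> Validate(T instance);
}

public class ValidationError
{
    public string PropertyName { get; set; }
    public string Message { get; set; }
}

public class CapacityModelValidator : IValidator<CapacityModel>
{
    public IEnumerable<ValidationError> Validate(CapacityModel instance)
    {
        if (instance.NumMachines <= 0)
            yield return new ValidationError
            {
                PropertyName = "NumMachines",
                Message = "Number of machines must be greater than zero."
            };
    }
}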
I suspect you may be blurring the boundary between unit tests and acceptance tests.
An acceptance test is something that is very business user focused. It treats the application as a black box but checks to confirm that the interface to the application behaves in the way the user would expect it to.
In your example I would see an acceptance test as something like:
For a simple business rule (number of machines must be greater than zero), ensure that correct feedback is given to the user in the event of the business rule being violated.
I would have a chat with the Product Owner at this stage to understand what they regard as 'correct feedback' and how they want it to be displayed.
The important thing here is that you are not testing how business rules are evaluated or what the internal mechanism is for handling errors. You are purely focused on the end-user interaction.
Of course you will also want to implement unit testing to ensure that your technical solution is solid, and this is where you go into details about where business logic is implemented.
How you handle business logic is very much a design decision. Personally, if I had the business logic in the database I would also have a table containing rule descriptions that would be used as a look-up in the event of a rule being violated. The other layers of the application would know nothing of the business logic, but would know how to pass through the error message.
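As a sketch of that look-up idea (hypothetical names; assumes SQL Server, where error number 547 signals a CHECK or FOREIGN KEY constraint violation):

try
{
    repository.Save(capacity); // hits the real CONSTRAINT in the database
}
catch (SqlException ex) when (ex.Number == 547)
{
    // Both helpers below are hypothetical: pull the constraint name out of the
    // exception text, then look up its friendly description in the rules table.
    string constraintName = ExtractConstraintName(ex.Message);
    string friendlyMessage = ruleDescriptions.Lookup(constraintName);
    throw new BusinessRuleViolationException(friendlyMessage, ex);
}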
Let's imagine I'm using MVC: the controller gets the request from the user, does the logic, and returns the response.
People say that controllers shouldn't do any logic: they simply hand the incoming request to a service, and all the logic stays in the service classes' methods. But I don't understand why this is good.
Argument 1) People say this is good because you have skinny controllers instead of fat controllers, but who cares about skinny controllers if they don't give you any benefit?
Argument 2) Your business logic is somewhere else and not coupled to the controller. Why is this argument even worth mentioning? Okay, I have all the logic in service classes' methods and my controllers are two lines of code. What did that give me? The service classes are still huge. Is this some kind of benefit?
Argument 3) One engineer even told me that the service layer is good because service methods return objects to the controller, and in the controller we sometimes return JSON or some other format. He told me this is good if we have desktop/web/mobile applications all together and we are writing an API for them. But it still doesn't make sense to me.
What people do, and what I hate, is that they use a repository and a service together (the service methods contain the business logic and the calls to the repository classes' methods).
Here is how I think: if I'm using service classes (I call them helpers), a service method shouldn't contain anything related to the framework. If there's framework-dependent code in it, that's bad, because all my business logic becomes tightly coupled to the framework. What my friend advised is that I put the get/insert/update Eloquent calls in the controller and pass the results to a helper (service) which does the data modification. This way, to test a helper (service) there's no need to inject a repository or model at all. And why would we even need to test the repository or model? (It's already tested by the framework.)
I just have to understand how a service layer is going to help me. The thing is, I've read so much, and none of the articles really spell out the real benefits. Could we discuss the pros and cons with examples?
Abstraction.
The theory that most people subscribe to is that all functions should do one thing only and one thing well. Keeping this in mind, one huge controller method doesn’t make sense.
But what about just using lots of private methods in your controller file?
It's arguably harder to debug private methods than public ones, because you typically can't access them to unit test them. Why not just make them public and keep them in the same file? Because that's not how we do things: separation of concerns is very important in the MVC model.
Here’s how a controller should work in MVC:
Take an input
Do stuff (doesn’t matter what that stuff is)
Output
The controller shouldn’t care about the business logic - complicated logic should just be a black box that just works as far as the controller is concerned.
Moreover, if you have external API calls, the controller should never be doing them directly. They should be hidden away in a connectors package and accessed via a service layer.
I think the main point of using a service layer is that if your business logic ever changes, your controller shouldn’t care. Your controller should be as “stupid” as possible.
In order to make each layer of your app as reliable and predictable as possible, you need to make sure each layer has a defined purpose - the controller shouldn’t be taking in inputs, doing complex logic, and giving outputs. Obviously if the logic is tiny then abstracting it is a bit of overkill but it’s a good habit to get into.
Finally, it makes debugging your code easier for other people. If you move on from this project and someone else has to pick up where you left off, if your code is all in one place then they will hate you. Finding bugs and making improvements is very hard when everything is all together. If you follow convention and separate your business logic away from your controller, you will make other people’s lives easier as they’ll know what to expect.
Basically, just do it. It’s a good practice to get into and will make your life easier in the future.
I will try to give a simple answer:
Try to write a unit test for a fat controller...
It is better to test a class, or even an interface method, than a controller method that returns a view. It is better to test a method that has fewer preconditions and responsibilities. A controller method is the final method that integrates all the logic: HTTP processing, validation, business rules, and so on. For unit testing, debugging, and reuse, it is better to keep these logical parts separate.
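For example, a sketch of what testing the separated part can look like (hypothetical names, NUnit-style): no HTTP context, no routing, no view engine--just the class under test and a hand-rolled fake.

[Test]
public void PlaceOrder_WithEmptyCart_IsRejected()
{
    var service = new OrderService(new FakeOrderRepository());

    var result = service.PlaceOrder(new OrderRequest { Items = new List<OrderItem>() });

    Assert.IsFalse(result.Succeeded);
}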
I'm creating an application whose architecture is based on Uncle Bob's Clean Architecture concepts and DDD. Note that it is BASED on DDD, so I gave myself the freedom to differ from strict DDD.
To create this application, I am using C# with .Net Standard 2.0
One of the principles of DDD relates to Value Objects. The definition for Value Objects, according to Wikipedia is as follows:
Value Object
An object that contains attributes but has no conceptual identity. They should be treated as immutable.
Example: When people exchange business cards, they generally do not distinguish between each unique card; they are only concerned about the information printed on the card. In this context, business cards are value objects.
Now, I want my Value Objects to disallow their creation when some validation does not succeed; in that case, an exception is thrown during instantiation. I really do mean to throw an exception there, because the core of the architecture really does not expect any invalid data to reach that point.
Before going further on this question, to give you guys some more background, here is my architecture (NOTE: still incomplete; the diagram is not reproduced here):
The rules I am following in this architecture is:
A layer can only know about its immediate innermost neighbor layer's interfaces
A layer cannot know anything about any outermost layer
All communications between layers MUST be done through interfaces
Each layer must be independently deployable
Each layer must be independently developable
To better understand the arrows in this diagram, I recommend reading these Stack Exchange questions:
Explanation of the UML arrows
https://softwareengineering.stackexchange.com/questions/61376/aggregation-vs-composition
Now, the challenge I'm facing right now is finding a good way to use the validators; I'm not satisfied with my architecture on this point. The problem is the following:
Since I can have thousands of Value Objects being instantiated at any given time, I don't want each instance of a Value Object to carry an instance method to perform the validation. I want the validation method to be static, since its logic will be the same for every instance. Also, I want the validation logic to be available for the upper layers of the architecture to use, so they can perform validations without trying to instantiate the Value Objects and thereby causing an expensive exception to be thrown.
The problem is: C# DOES NOT ALLOW polymorphism with static methods, so I can't do something like:
internal interface IValueObject<T>
{
    T Value { get; }

    // This is the line C# rejects: interfaces could not declare static members
    // at the time (static abstract interface members only arrived in C# 11).
    static bool IsValid(T value);
}
How can I achieve this functionality without relying on static methods polymorphism and, at the same time, not wasting memory?
It's a good thing that you can think abstractly but you should generalize after you write some working code.
A general, clean, one-size-fits-all DDD architecture is a myth. In fact, DDD applies only to the Domain layer. That's the beauty of it: it's technology agnostic.
In my projects I don't even have a base class or an interface for Value Objects, Entities, or Aggregate Roots. I don't need them. All of these building blocks are POPOs (plain old PHP objects).
In my opinion, a Clean architecture is the one that keeps the Domain layer technology agnostic, without dependencies to any external frameworks. The other layers could be almost anything.
I suggest you get rid of IsValid() and make your Value Objects self-checking, always valid objects. Making them immutable is recommended and will obviously help a lot in that regard. You only have to check the invariants once during creation.
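For example, a minimal sketch of a self-checking, always-valid value object (the type is hypothetical): the invariant is enforced once, in the constructor, and immutability keeps it true afterwards.

public sealed class Quantity
{
    public int Value { get; }

    public Quantity(int value)
    {
        if (value <= 0)
            throw new ArgumentOutOfRangeException(nameof(value), "Quantity must be greater than zero.");
        Value = value;
    }
}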
[Edit]
You might need to treat that as a first pass of input validation instead of value object invariant enforcement. If there's a huge amount of unsafe data coming in that you want to turn into value objects, first handle it in a validation process in the outer layer - you can make all performance optimizations you need, implement error logic and coordinate VO creation there.
In clean architecture all business logic goes into use case interactors. Validation rules are part of the business logic so should go into use case interactors as well.
In your case I would suggest putting your validation in interactors which take a "request model", validate the "parameters", and then return the respective value object(s) as (part of) the response model.
This way the validation logic is in the right layer and value objects are only created when validation succeeds - so no invalid value objects are created and no performance is wasted.
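A rough sketch of that flow (all names are hypothetical):

public class RegisterMachinesInteractor
{
    public Response Handle(RequestModel request)
    {
        // Validate the raw parameters first...
        if (request.NumMachines <= 0)
            return Response.Invalid("Number of machines must be greater than zero.");

        // ...and only construct the value object once the checks have passed.
        var quantity = new Quantity(request.NumMachines);
        return Response.Ok(quantity);
    }
}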
I want to keep myself as short as possible:
First: I read related posts, but they didn't help a lot.
See: What is a quality real world example of TDD in action?
Or: How do you do TDD in a non-trivial application?
Or: TDD in ASP.NET MVC: where to start?
Background:
I'm not a total TDD beginner, I know the principles
I've read Robert C. Martin and Michael Feathers and the like
TDD works fine for me in Bowling and TicTacToe Games
But I'm kind of lost when I want to do TDD in my workplace. It's not about mocking, I kinda do know how to mock the dependencies.
It's more:
WHEN do I code WHAT?
WHERE do I begin?
And: WHEN and HOW do I implement the "database" or "file system" code? It's cool to mock it, but at the integration test stage I need it as real code.
Imagine this (example):
Write a program which reads a list of all customers from a database.
Related to the customer IDs it has to search data from a csv/Excel file.
Then the business logic does magic to it.
At the end the results are written to the database (different table).
I never found a TDD example for an application like that.
EDIT:
How would you as a programmer implement this example in TDD style?
PS: I'm not talking about db-unit testing or gui unit testing.
You could start without a database entirely. Just write an interface with the most basic method to retrieve the customers
public interface ICustomerHandler
{
    List<Customer> GetCustomers(int customerId);
}
Then, using your mocking framework, mock that interface while writing a test for a method that will use and refer to an implementation of the interface. Create new classes along the way as needed (Customer, for instance), this makes you think about which properties are required.
[TestMethod()]
public void CreateCustomerRelationsTest()
{
    var manager = new CustomerManager(MockRepository.GenerateMock<ICustomerHandler>());
    var result = manager.CreateCustomerRelations();

    Assert.AreEqual(1, result.HappyCustomers);
    Assert.AreEqual(0, result.UnhappyCustomers);
}
Writing this bogus test tells you what classes are needed, like a CustomerManager class with a CreateCustomerRelations method whose result exposes two properties. The method should call the GetCustomers method on the interface, using the mock instance that was injected through the class constructor.
Do just enough to make the project build and let you run the test for the first time, which will fail as there's no logic in the method being tested. However, you are off on a great start with letting the test dictate which input your method should take, and what output it should receive and assert. Defining the test conditions first helps you in creating a good design. Soon you will have enough code written to ensure the test confirms your method is well designed and behaves the way you want it to.
Think about what behaviour you are testing, and use this to drive a single higher level test. Then as you implement this functionality use TDD to drive out the behaviour you want in the classes you need to implement this functionality.
In your example I'd start with a simple no-op situation. (I'll write it in BDD language, but you could similarly implement this in code.)
Given there are no customers in the database
When I read customers and process the related data from the csv file
Then no data should be written to the database
This sort of test will allow you to get some of the basic functionality and interfaces in place without having to implement anything in your mocks (apart from maybe checking that you are not calling the code to do the final write)
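For example, that first scenario could look roughly like this in code (hypothetical names; Moq-style mocking assumed):

[Test]
public void GivenNoCustomersInDatabase_WhenProcessing_ThenNothingIsWritten()
{
    // Given: there are no customers in the database
    var source = new Mock<ICustomerSource>();
    source.Setup(s => s.GetCustomers()).Returns(new List<Customer>());
    var writer = new Mock<IResultWriter>();

    // When: we read customers and process the related data
    var processor = new CustomerProcessor(source.Object, writer.Object);
    processor.Run();

    // Then: no data is written to the database
    writer.Verify(w => w.Write(It.IsAny<Result>()), Times.Never());
}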
Then I'd move on to a slightly wider example
Given there are some customers in the database
But none of these customers are in the CSV file
When I read customers and process the related data from the csv file
Then no data should be written to the database
And I would keep adding incrementally and adjusting the classes needed to do this, initially probably using mocks but eventually working up to using the real database interactions.
I'd be wary of writing a test for every class, though. This can make your tests brittle, and they may need changing every time you make a small refactoring-like change. Focus on the behaviour, not the implementation.
Your flow should be something like this: (flow diagram omitted)
I've received the go-ahead to start building the foundation for a new architecture for our code base at my company. The impetus for this initiative is the fact that:
Our code base is over ten years old and is finally breaking at the seams as we try to scale.
The top "layers", if you want to call them such, are a mess of classic ASP and .NET.
Our database is filled with a bunch of unholy stored procs which contain thousands of lines of business logic and validation.
Prior developers created "clever" solutions that are non-extensible, non-reusable, and exhibit very obvious anti-patterns; these need to be deprecated in short order.
I've been referencing the MS Patterns and Practices Architecture Guide quite heavily as I work toward an initial design, but I still have some lingering questions before I commit to anything. Before I get into the questions, here is what I have so far for the architecture:
(Diagram: high-level view)
(Diagram: business and data layers in depth)
The diagrams basically show how I intend to break apart each layer into multiple assemblies. So in this candidate architecture, we'd have eleven assemblies, not including the top-most layers.
Here's the breakdown, with a description of each assembly:
Company.Project.Common.OperationalManagement : Contains components which implement exception handling policies, logging, performance counters, configuration, and tracing.
Company.Project.Common.Security : Contains components which perform authentication, authorization, and validation.
Company.Project.Common.Communication : Contains components which may be used to communicate with other services and applications (basically a bunch of reusable WCF clients).
Company.Project.Business.Interfaces : Contains the interfaces and abstract classes which are used to interact with the business layer from high-level layers.
Company.Project.Business.Workflows : Contains components and logic related to the creation and maintenance of business workflows.
Company.Project.Business.Components : Contains components which encapsulate business rules and validation.
Company.Project.Business.Entities : Contains data objects that are representative of business entities at a high-level. Some of these may be unique, some may be composites formed from more granular data entities from the data layer.
Company.Project.Data.Interfaces : Contains the interfaces and abstract classes which are used to interact with the data access layer in a repository style.
Company.Project.Data.ServiceGateways : Contains service clients and components which are used to call out to and fetch data from external systems.
Company.Project.Data.Components : Contains components which are used to communicate with a database.
Company.Project.Data.Entities : Contains much more granular entities which represent business data at a low level, suitable for persisting to a database or other data source in a transactional manner.
My intent is that this should be a strict-layered design (a layer may only communicate with the layer directly below it) and the modular break-down of the layers should promote high cohesion and loose coupling. But I still have some concerns. Here are my questions, which I feel are objective enough that they are suitable here on SO...
Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
Is it beneficial to break apart the business and data layers into multiple assemblies?
Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
MOST IMPORTANTLY - Is it beneficial to have an "Entities" assembly for both the business and data layers? My concern here is that if you include the classes that will be generated by LINQ to SQL inside the data access components, then a given entity will be represented in three different places in the code base. Obviously tools like AutoMapper may be able to help, but I'm still not 100% sure. The reason that I have them broken apart like this is to (A) enforce a strict-layered architecture and (B) promote looser coupling between layers and minimize breakage when changes to the business domain behind each entity occur. However, I'd like to get some guidance from people who are much more seasoned in architecture than I am.
If you could answer my questions or point me in the right direction I'd be most grateful. Thanks.
EDIT:
Wanted to include some additional details that seem to be more pertinent after reading Baboon's answer. The database tables are also an unholy mess and are quasi-relational, at best. However, I'm not allowed to fully rearchitect the database and do a data clean-up: the furthest down to the core I can go is to create new stored procs and start deprecating the old ones. That's why I'm leaning toward having entities defined explicitly in the data layer--to try to use the classes generated by LINQ to SQL (or any other ORM) as data entities just doesn't seem feasible.
I would disagree with this standard layered architecture in favor of an onion architecture.
With that in mind, I'll give your questions a try:
1. Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
Yes, I would agree that it is not a bad convention, and pretty much standard.
2. Is it beneficial to break apart the business and data layers into multiple assemblies?
Yes, but I'd rather have one assembly called Domain (usually Core.Domain) and another called Data (Core.Data). The Domain assembly contains all the entities (as per domain-driven design) along with repository interfaces, services, factories, etc... The Data assembly references the Domain and implements concrete repositories, with an ORM.
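A minimal sketch of that split (types are hypothetical; NHibernate is assumed for the concrete repository):

// In Core.Domain -- the abstraction, with no data-access dependencies:
public interface ICustomerRepository
{
    Customer FindById(int id);
    void Add(Customer customer);
}

// In Core.Data, which references Core.Domain -- the ORM-backed implementation:
public class NHibernateCustomerRepository : ICustomerRepository
{
    private readonly ISession _session;

    public NHibernateCustomerRepository(ISession session)
    {
        _session = session;
    }

    public Customer FindById(int id)
    {
        return _session.Get<Customer>(id);
    }

    public void Add(Customer customer)
    {
        _session.Save(customer);
    }
}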
3. Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
It depends on various factors. In the answer to the previous question, I mentioned separating interfaces for repositories into the Domain, and concrete repositories into the Data assembly. This gives you a clean Domain without any "pollution" from specific data-access or other technology. Generally, I base my code on TDD-oriented thinking, extracting all dependencies from classes to make them more usable, following the SRP, and thinking about what can go wrong when other people on the team use the architecture :) For example, one big advantage of separating into assemblies is that you control your references and clearly state "no data-access code in the Domain!".
4. Is it beneficial to have an "Entities" assembly for both the business and data layers?
I would disagree, and say no. You should have your core entities and map them to the database through an ORM. If you have complex presentation logic, you can have something like ViewModel objects, which are basically entities dumbed down to just the data suited for representation in the UI. If you have something like a network in between, you can have special DTO objects as well, to minimize network calls. But I think having separate data and business entities just complicates matters.
One more thing to add here: if you are starting a new architecture, and you are talking about an application that has already existed for 10 years, you should consider better ORM tools than LINQ to SQL--either Entity Framework or NHibernate (I'd opt for NHibernate).
I would also add that answering as many questions as one whole application architecture raises is hard, so try posting your questions separately and more specifically. For each part of the architecture (UI, service layers, domain, security, and other cross-cutting concerns) you could have multi-page discussions. Also, remember not to over-architect your solution and thereby complicate things even more than needed!
I actually just started the same thing, so hopefully this will help or at least generate more comments and even help for myself :)
1. Are my naming conventions for each module and its respective assembly following standard conventions, or is there a different way I should be going about this?
According to MSDN Names of Namespaces, this seems to be ok. They lay it out as:
<Company>.(<Product>|<Technology>)[.<Feature>][.<Subnamespace>]
For example, Microsoft.WindowsMobile.DirectX.
2. Is it beneficial to break apart the business and data layers into multiple assemblies?
I definitely think it's beneficial to break apart the business and data layers into multiple assemblies. However, in my solution I've created just two assemblies (DataLayer and BusinessLayer). The other details, like Interfaces, Workflows, etc., I would create directories for under each assembly. I don't think you need to split them up at that level.
3. Is it beneficial to have the interfaces and abstract classes for each layer in their own assemblies?
Kind of goes along with the above comments.
4. Is it beneficial to have an "Entities" assembly for both the business and data layers?
Yes. I would say that your data entities might not map directly to what your business model will be. When storing the data to a database or other medium, you might need to change things around to make it play nice. The entities that you expose to your service layer should be usable by the UI. The entities you use in your Data Access Layer should be usable by your storage medium. AutoMapper is definitely your friend and can help with the mapping, as you mentioned. So this is how it shapes up:
(Diagram: layered architecture; source: microsoft.com)
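A small sketch of that mapping step (types are hypothetical; AutoMapper's MapperConfiguration API is assumed):

var config = new MapperConfiguration(cfg =>
{
    // Data-layer record in, business entity out.
    cfg.CreateMap<Data.Entities.CustomerRecord, Business.Entities.Customer>();
});
var mapper = config.CreateMapper();

var customer = mapper.Map<Business.Entities.Customer>(record);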
1) The naming is absolutely fine, just as SwDevMan81 stated.
2) Absolutely, If WCF gets outdated in a few years, you'll only have to change your DAL.
3) The rule of thumb is to ask yourself this simple question: "Can I think of a case where I will make smart use of this?".
When talking about your WCF contracts, yes, definitely put those in a separate assembly: it is key to a good WCF design (I'll go into more details).
When talking about an interface that is defined in AssemblyA, implemented in AssemblyB, and whose properties/methods are used in AssemblyC, you are fine as long as every class defined in AssemblyB is used in C through an interface. Otherwise, you'll have to reference both A and B: you lose.
4) The only reason I can think of to actually move the same-looking object around three times is bad design: the database relations were poorly crafted, and thus you have to tweak the objects that come out in order to have something you can work with.
If you redo the architecture, you can have another assembly, used in pretty much every project, called "Entities", that holds the data objects. By every project I mean WCF as well.
On a side note, I would add that the WCF service should be split into three assemblies: the ServiceContracts, the Service itself, and the Entities we talked about. I had a good video on that last point, but it's at work; I'll link it tomorrow!
HTH,
bab.
EDIT: here is the video.
Where should I place the validation logic for the domain objects in my solution? Should I put it in the domain classes, the business layer, or somewhere else?
I would also like to make use of Validation Application Block and Policy Injection Application Block from Microsoft Enterprise Library for this.
What validation strategy should I be using to fit all these together nicely?
Thanks all in advance!
It depends. First, you need to understand what you are validating.
You might validate:
that a value you retrieve from an HTTP POST can be parsed as a DateTime,
that Customer.Name is not longer than 100 symbols,
that a Customer has enough money to purchase stuff.
As you can see, these validations are different in nature, so they should be separated. Their importance varies too (see the "All rules aren't created equal" paragraph).
One thing you might want to consider is not allowing a domain object to be in an invalid state.
That will greatly reduce complexity, because at any given moment you know the object is valid and you only need to validate the things related to the current task in order to advance.
Also, you should consider avoiding the use of tools in your domain model, because it should be as infrastructure-free as possible.
Another thing: embrace the use of value objects. They are great for encapsulating validation.
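For example, a sketch of a value object encapsulating the Customer.Name rule from above (the type is hypothetical): the 100-symbol rule lives in exactly one place and cannot be bypassed.

public sealed class CustomerName
{
    public string Value { get; }

    public CustomerName(string value)
    {
        if (string.IsNullOrWhiteSpace(value) || value.Length > 100)
            throw new ArgumentException("Name must be between 1 and 100 symbols.", nameof(value));
        Value = value;
    }
}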
You can do either, depending on your needs.
Putting it in domain classes makes sure the validation is always done, but can make the classes bloated. It also can go against the single responsibility principle depending on how you interpret that (it adds the responsibility to validate). Putting it in domain classes also restricts you to one kind of validation. Also, unless you use inheritance, the same rule might have to be implemented multiple times in related classes (DRY). Validation is spread out through your domain if you do it this way.
External validation (you can get a validation object through DI, factories, the business layer, or context) makes sure you can swap out the validation rules depending on context (e.g. for a long-running process you want to save in a partially finished state, you could have one validation object just to be able to save, and another to check whether the domain class is really valid and ready to be used). Your domain classes will be simpler (fewer responsibilities, though you'd still have to do minimal checks, like null checks, to prevent run-time errors), and you can reuse rule sets for related classes as well. Validation is centred in a small area of your domain model this way. By the way, you can inject the external validation into the domain class itself, making sure the classes do validate themselves; they just don't know what they are validating.
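For example, a sketch of swappable, context-dependent validation (all names are hypothetical):

public interface IOrderValidator
{
    IList<string> Validate(Order order);
}

// Lenient rules: just enough to save a partially finished order.
public class DraftOrderValidator : IOrderValidator
{
    public IList<string> Validate(Order order)
    {
        var errors = new List<string>();
        if (order.Customer == null)
            errors.Add("An order must belong to a customer, even as a draft.");
        return errors;
    }
}

// Strict rules: everything the draft requires, plus readiness checks.
public class FinalOrderValidator : IOrderValidator
{
    public IList<string> Validate(Order order)
    {
        var errors = new DraftOrderValidator().Validate(order);
        if (order.Lines.Count == 0)
            errors.Add("A finished order needs at least one order line.");
        return errors;
    }
}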
I can't comment on the Validation Application Block though. As always, you have to weigh the pros against the cons; there is never one universally valid solution.
First off, I agree with #i8abug.
But I did want to go a bit further and talk about architecture. Every one of those design architectures, like domain-driven design, should be taken as nothing more than a suggestion and viewed with scrutiny.
At every step you should ask yourself what the benefit and drawbacks of the point in question is with regards to your application.
A lot of these involve adding a tremendous amount of code and seriously complicating projects with very little benefit.
The validation point is a prime example. As Stefan said, the principle of single responsibility basically says you need to create a whole set of other classes whose purpose is to only validate the state of the original objects. Obviously this adds a LOT of code to the app. Maybe it's generated for you, maybe you have to hand write it. Regardless, more code generally equates to being less robust and certainly equates to being harder to understand.
The benefit of separating all of that is that you can swap out validation rules. Ok, fine. The drawback is that you now have 2 files to look at and create for each class definition. ie: more work. Does your app need to swap out validation rules? Probably not. I'd even wager to say very very few do.
Quite frankly, if you go down this path then you may as well define everything as a struct and let all of those "helper" classes creep back to take care of validation, persistence, setting properties, etc as being a full blown class buys you almost nothing.
All of that said, I tend towards self contained classes. In other words they know how their properties relate to each other and know what are acceptable values. They can also perform operations on themselves and their children. In other words, they know what they are. This tends to lead to simplified coding and implementation. It also leads to knowing exactly where to go for a modification or change. The only separation I really do here is to implement Inversion of Control for persistence; which allows me to swap out data providers at runtime; which has been a requirement on several applications I've done.
Point is, think through what you are doing and decide if it's really the best way to go in your particular situation. All of these programming "rules" are just suggestions after all.
I generally put it in the domain objects. This is because the domain objects are the things that I am concerned about validating so if a rule for a specific object changes, I know where to update it rather than having to search through a bunch of unrelated entity rules in some specific validation class/file.
I realize this may not be considered POCO, but every project has specific exceptions, and this one often makes sense to me. Likewise, in some projects it makes sense to have your domain entities referenced from the views and, therefore, implement INotifyPropertyChanged rather than constantly copying values from entities to a whole other set of view-specific objects.
The old way I did validation was with an IValidator interface like the one below, which each entity implemented.
public interface IValidator
{
    IList<RuleViolation> GetViolations();
}
Now I do this using NHibernate Validation (you don't need to use the NHibernate ORM to take advantage of the validation library); it is done simply through attributes.
// I can't remember the exact syntax, but it is very similar to this
public class MyEntity
{
    [Length(Min = 1, Max = 10)] // a constraint attribute from the validation library
    public string Name { get; set; }
}

// ... and then later ...
var engine = new ValidatorEngine();
engine.Validate(myEntity);
Edit: I removed my earlier comment about not being a huge fan of Enterprise Library in general, since Chris informed me that it is now very similar to NHibernate Validation.