Model.Is___ - Should it be a Property or a Method? - C#

As I design the models for a domain, they almost always end up having some .IsSomething functionality on them. IsNew and IsDirty are common for data persistence purposes, IsValid for business rule validation, even IsFraudulent in a current project (more business rule validation), etc. Whenever I see these implemented by others, they are almost invariably implemented as methods. But I find myself wondering if there's a particular reason for that.
I tend to see properties as describing an object and methods as performing some kind of action. These don't really perform an action. They involve code because their values are determined dynamically when called, and they're clearly read-only, but to me they still fit as properties rather than methods.
There could potentially be a serialization issue with properties, I suppose. Though a rich domain model tends not to serialize well anyway, given that it contains logic and functionality, so any time I need to move something across a service boundary I generally flatten it into a defined DTO structure first.
But I wonder if anybody else has any insight on the subject? Is there a good reason to implement these as methods rather than as properties?
(Tangentially related, though an answer has already been given, extension properties would really help with consistency on something like this. I have a number of IsSomething() extension methods, usually on System.String, for implementing domain-specific logic. But even if properties are the way to go, I may want to stick with methods just for consistency with the extensions.)

Assuming that accessing the property:
Has no side-effects
Is "reasonably speedy" (yeah, very woolly...)
then I see no reason not to make it a property. The serialization shouldn't be an issue - most serialization schemes provide ways of marking a property as transient (i.e. not-to-be-serialized).
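For illustration, a minimal sketch of such a property (assuming Json.NET here; [JsonIgnore] is just one example of a transient marker, and other serializers have their own equivalents):
using Newtonsoft.Json;

public class Customer
{
    private string _name = string.Empty;

    public string Name
    {
        get { return _name; }
        set { _name = value; IsDirty = true; }
    }

    // Computed on demand, side-effect free, and cheap to evaluate -
    // a natural fit for a read-only property.
    [JsonIgnore]
    public bool IsValid
    {
        get { return !string.IsNullOrEmpty(_name); }
    }

    // Transient bookkeeping state, excluded from serialization.
    [JsonIgnore]
    public bool IsDirty { get; private set; }
}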

I would use a property because:
It describes the object in some way, so conceptually it's a characteristic of the object - its property
It does not take any parameters
It just retrieves certain data rather than performing any standalone action or modification

Related

Where do you put Validation in projects with Domain Driven Design?

Where should I place the validation logic of the domain objects in my solution? Should I put it in the domain classes, the business layer, or elsewhere?
I would also like to make use of Validation Application Block and Policy Injection Application Block from Microsoft Enterprise Library for this.
What validation strategy should I be using to fit all these together nicely?
Thanks all in advance!
It depends. First, you need to understand what you are validating.
You might validate:
that a value you retrieve from an HTTP post can be parsed as a DateTime,
that Customer.Name is not larger than 100 symbols,
that a Customer has enough money to purchase stuff.
As you can see, these validations are different in nature, so they should be separated. Their importance varies too (see the "All rules aren't created equal" paragraph).
One thing you might want to consider is not allowing a domain object to be in an invalid state.
That will greatly reduce complexity, because at any given moment you know the object is valid, and you only need to validate the things related to the current task in order to advance.
Also, you should consider avoiding the use of tools in your domain model, because it should be as infrastructure-free as possible.
Another thing: embrace the use of value objects. Those are great for encapsulating validation.
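To make the value-object point concrete, a minimal sketch (the CustomerName type is illustrative, not from the original answer); because validation runs in the constructor, an instance can never exist in an invalid state:
using System;

public sealed class CustomerName
{
    public string Value { get; private set; }

    public CustomerName(string value)
    {
        // Validate up front: a constructed CustomerName is valid by definition.
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Name is required.", "value");
        if (value.Length > 100)
            throw new ArgumentException("Name must not be larger than 100 symbols.", "value");
        Value = value;
    }
}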
You can do either, depending on your needs.
Putting it in domain classes makes sure the validation is always done, but can make the classes bloated. It can also go against the single responsibility principle, depending on how you interpret that (it adds the responsibility to validate). Putting it in domain classes also restricts you to one kind of validation. Also, unless you use inheritance, the same rule might have to be implemented multiple times in related classes (violating DRY). Validation is spread throughout your domain if you do it this way.
External validation (you can get a validation object through DI, factories, the business layer, or context) makes sure you can swap out the validation rules depending on context (e.g. for a long-running process you want to save in a partially finished state, you could have one validation object just to be able to save, and another to check whether the domain class is really valid and ready to be used). Your domain classes will be simpler (fewer responsibilities, though you'd still have to do minimal checks, like null checks, to prevent runtime errors), and you could reuse rule sets for related classes as well. Validation is centred in a small area of your domain model this way. By the way, you can inject the external validation into the domain class itself, making sure the classes do validate themselves; they just don't know what they are validating.
Can't comment on the Validation Application Block though. As always, you have to weigh the pros against the cons; there is never one valid solution.
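As a rough sketch of the context-specific external validation described above (all type names here are hypothetical):
using System.Collections.Generic;

public class Order
{
    public string Customer { get; set; }
    public List<string> Lines { get; private set; }

    public Order() { Lines = new List<string>(); }
}

public interface IValidator<T>
{
    // Returns rule violations; an empty sequence means valid.
    IEnumerable<string> Validate(T instance);
}

// Minimal rules: just enough to save a partially finished draft.
public class DraftOrderValidator : IValidator<Order>
{
    public IEnumerable<string> Validate(Order order)
    {
        if (order.Customer == null)
            yield return "Even a draft order needs a customer.";
    }
}

// Full rules: the order must be really valid and ready to be used.
public class SubmitOrderValidator : IValidator<Order>
{
    public IEnumerable<string> Validate(Order order)
    {
        if (order.Customer == null)
            yield return "An order needs a customer.";
        if (order.Lines.Count == 0)
            yield return "An order needs at least one line.";
    }
}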
First off, I agree with #i8abug.
But I did want to go a bit further to talk architecture. Every one of those design architectures, like domain driven, should be taken as nothing more than a suggestion and viewed with scrutiny.
At every step you should ask yourself what the benefit and drawbacks of the point in question is with regards to your application.
A lot of these involve adding a tremendous amount of code and seriously complicating projects with very little benefit.
The validation point is a prime example. As Stefan said, the principle of single responsibility basically says you need to create a whole set of other classes whose purpose is to only validate the state of the original objects. Obviously this adds a LOT of code to the app. Maybe it's generated for you, maybe you have to hand write it. Regardless, more code generally equates to being less robust and certainly equates to being harder to understand.
The benefit of separating all of that is that you can swap out validation rules. OK, fine. The drawback is that you now have two files to look at and create for each class definition, i.e. more work. Does your app need to swap out validation rules? Probably not. I'd even wager that very, very few do.
Quite frankly, if you go down this path then you may as well define everything as a struct and let all of those "helper" classes creep back in to take care of validation, persistence, setting properties, etc., as being a full-blown class buys you almost nothing.
All of that said, I tend towards self contained classes. In other words they know how their properties relate to each other and know what are acceptable values. They can also perform operations on themselves and their children. In other words, they know what they are. This tends to lead to simplified coding and implementation. It also leads to knowing exactly where to go for a modification or change. The only separation I really do here is to implement Inversion of Control for persistence; which allows me to swap out data providers at runtime; which has been a requirement on several applications I've done.
Point is, think through what you are doing and decide if it's really the best way to go in your particular situation. All of these programming "rules" are just suggestions after all.
I generally put it in the domain objects. This is because the domain objects are the things that I am concerned about validating so if a rule for a specific object changes, I know where to update it rather than having to search through a bunch of unrelated entity rules in some specific validation class/file.
I realize this may not be considered POCO but every project has specific exceptions and this one often makes sense to me. Likewise, in some projects it makes sense to have your domain entities referenced from the views and, therefore, implement INotifyPropertyChanged rather than constantly copying values from entities to a whole other set of view-specific objects.
The old way I did validation was to have an IValidator interface, like the one below, which each entity implemented.
public interface IValidator
{
    IList<RuleViolation> GetViolations();
}
Now I do this using NHibernate Validator (you don't need to use the NHibernate ORM to take advantage of the validation library; it is done simply through attributes).
//I can't remember the exact syntax but it is very similar to this
public class MyEntity
{
    [Length(Min = 1, Max = 10)]
    public string Name { get; set; }
}
//... and then later ...
var engine = new ValidatorEngine();
InvalidValue[] violations = engine.Validate(myEntity);
Edit: I removed my earlier comment about not being a huge fan of Enterprise Library in general, since Chris informed me that it is now very similar to NHibernate Validator.

Using IDataErrorInfo or any similar pattern for propagating error messages

I apologise for this question as it is rather fuzzy, and there are several questions rolled into one, but as they are so closely related I did not want to break them apart into several submissions.
I am currently thinking about how to test for configuration errors in an application. There are different options available, and one which has been used before is the IDataErrorInfo interface. I am not extremely happy with how this implementation looks - not because it doesn't work (it does), but because I don't fully agree with the actual implementation. I have been searching around this site (all 52 related questions) and others to see why Microsoft decided that using the keyword "this" with an index would be a good idea. It is typically used for indexing items in a collection, and even though one could consider the classes I implement as a collection of errors, I do not really agree that the "this[]" keyword should implicitly be used for testing for them. (Side note: does this mean that a custom collection class cannot have configuration errors of its own?) Why is this not a method call like "TestErrorState( string propertyname )" or even an indexed property? And how often is the "string Error { get; }" member actually used? To me it looks like kind of a "hack" and not really easy to use.
One of the practical issues I do have due to this implementation is that I have objects relating to other objects and I would like the error states to propagate. This turns out ugly as the class which is on display in the user interface should be in an “error state” due to a related object which is not necessarily shown to the user (unless the user clicks an option on the interface and moves “down” one level in the hierarchy of objects). This means that I need to extend the test for error modes with my own methods for propagating these errors and then I start questioning whether I should just implement something completely different and not go for the IDataErrorInfo interface at all.
Please let me know where I can find some good information about why IDataErrorInfo is the way it is. And if you can provide me with a nice idea for how to have error modes that propagate through a hierarchy of objects that would be just brilliant! When I say propagate I don’t mean as an exception as this feels like an event, it is just that when an object is asked for configuration errors it should also ask all its children for errors and then pass on also the children’s error messages.
The members of the IDataErrorInfo interface typically have no meaning outside of the interface itself, because you hardly ever want to request the validation errors of an entity this way yourself. The IDataErrorInfo interface is meant to be consumed by UI technologies such as MVC and WPF.
Because there is no need to call the Error property or the this[string] indexer directly, the IDataErrorInfo members can typically be implemented explicitly. This prevents them from showing up on the class itself and allows you to implement your own, more useful, this[] indexer yourself.
I agree that having an indexer on that interface probably wasn't the best design, but IDataErrorInfo is probably designed with explicit implementation in mind anyway, so it doesn't really matter.
Here is an example of how to implement IDataErrorInfo explicitly.
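A minimal sketch of what that looks like (the Person class and its rule are illustrative):
using System.ComponentModel;

public class Person : IDataErrorInfo
{
    public string Name { get; set; }

    // Explicit implementation: these members are only reachable through
    // an IDataErrorInfo reference, so they don't clutter Person itself.
    string IDataErrorInfo.Error
    {
        get { return ((IDataErrorInfo)this)["Name"]; }
    }

    string IDataErrorInfo.this[string columnName]
    {
        get
        {
            if (columnName == "Name" && string.IsNullOrEmpty(Name))
                return "Name is required.";
            return null;
        }
    }
}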
I am not sure if you are using MVVM, but it looks like it. I agree that this[] puts an extra, unnecessary constraint on the implementation when it could have been a GetError(string propertyName) method. I think this is a COM hangover and this interface is exposed through COM, but I will check and confirm.
In practice, I have never found this[] to cause any problem. It would almost never be implemented on a collection, since binding for collections usually uses ObservableCollection and only the individual items implement IDataErrorInfo.
If you have a hierarchy (e.g. Customer and Orders), you would typically implement a view model for both, and the order view model would have a reference to its parent customer view model so that it can propagate the error. In the scenarios I have had, this approach has always worked. If you have a specific problem, post a new question or update yours.
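As a rough sketch of that parent/child propagation (the view model names are illustrative; here the parent asks its children when queried, per the question's description):
using System.Collections.ObjectModel;
using System.ComponentModel;
using System.Linq;

public class OrderViewModel : IDataErrorInfo
{
    public int Quantity { get; set; }

    public string Error
    {
        get { return this["Quantity"]; }
    }

    public string this[string columnName]
    {
        get
        {
            if (columnName == "Quantity" && Quantity <= 0)
                return "Quantity must be positive.";
            return null;
        }
    }
}

public class CustomerViewModel : IDataErrorInfo
{
    public ObservableCollection<OrderViewModel> Orders { get; private set; }

    public CustomerViewModel()
    {
        Orders = new ObservableCollection<OrderViewModel>();
    }

    // When asked for errors, also ask the children, so a child's
    // error state bubbles up to the object shown in the UI.
    public string Error
    {
        get
        {
            return Orders.Select(o => o.Error)
                         .FirstOrDefault(e => !string.IsNullOrEmpty(e));
        }
    }

    public string this[string columnName]
    {
        get { return null; } // per-property errors for the customer itself would go here
    }
}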

Is it a good practice to implement logic in properties

We use ASP.NET with C#, and based on open source projects/articles I have gone through, I found that many properties included logic. But when I did the same, the team leader told me it's not good at all to place logic inside properties; the logic should be invoked through methods instead...
Is that really bad? And why shouldn't logic be used in properties?
Thanks,
Property access is expected to be instantaneous (no long waits), consistent (no changing values), and safe (no exceptions). If you can make those guarantees, I think putting logic in properties is OK.
It's fine to have some logic in properties. For example, argument validation in setters and lazy computation in getters are both fairly common.
It's usually a bad idea for a property access to do something expensive such as a database call, however. Developers tend to assume that properties are reasonably cheap to evaluate.
It's a judgement call in the end - but I certainly reject the suggestion that properties should only ever be trivial to the extent that they could be implemented with automatic properties.
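For example, a small sketch combining both of those common cases - argument validation in a setter and lazy computation in a getter:
using System;

public class Report
{
    private string _title;
    private string[] _lines;

    public string Title
    {
        get { return _title; }
        set
        {
            // Argument validation in a setter: cheap, safe, unsurprising.
            if (value == null)
                throw new ArgumentNullException("value");
            _title = value;
            _lines = null; // invalidate the cached computation
        }
    }

    public string[] Lines
    {
        get
        {
            // Lazy computation in a getter: built once, on first access.
            if (_lines == null)
                _lines = new[] { _title ?? string.Empty };
            return _lines;
        }
    }
}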
Properties are methods. They are just shortcuts for getters/setters. Any logic that would be valid in a getter/setter is reasonable to put in a property. Any logic that you would normally not put in a getter/setter would be inappropriate to put in a property. Generally speaking, if you (as a consumer of the class) couldn't reasonably expect that setting a property value, or even worse, getting a property value, might cause a behavior to take place, then that logic probably belongs elsewhere. In other words, the logic should be related to and consistent with getting or setting the property.
Quoting from the linked article above:
Properties are members that provide a flexible mechanism to read, write, or compute the values of private fields. Properties can be used as though they are public data members, but they are actually special methods called accessors. This enables data to be accessed easily while still providing the safety and flexibility of methods.
A common answer applies here: It Depends.
Generally, it is not a good idea to implement business logic in getters and setters. If your object is a simple DTO (data transfer object) this would violate Single Responsibility.
However, state-tracking logic and other housekeeping is often found in properties. For example, Entity Framework 4 self-tracking entities have state management logic in every primitive property setter to allow for tracking.
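A simplified sketch of that kind of state-tracking setter (this is just the shape of the idea, not EF's actual generated code):
public enum ObjectState { Unchanged, Modified }

public class Person
{
    private string _name;

    public ObjectState State { get; private set; }

    public string Name
    {
        get { return _name; }
        set
        {
            // Housekeeping rather than business logic: flag the entity
            // as modified whenever a primitive property changes.
            if (_name != value)
            {
                _name = value;
                State = ObjectState.Modified;
            }
        }
    }
}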
An alternative to logic in properties is Aspect-Oriented Programming (AOP.) Using AOP, you can "inject" logic between objects and the hosting process. Access to objects can be "intercepted" and handled conditionally.
Placing business logic in a setter can get you in trouble if you ever have the need to serialize/deserialize your objects with JSON, XML or an ORM. An example of this may be when using a NoSQL datastore like a document database, or an ORM. Some of these (e.g. NHibernate) can be configured to access backing fields instead of the setter.
I find that using a public getter and private setter, along with a method that sets the value and applies additional logic as required, is a good approach. Most serializers can access the private setter, so what you end up with is an accurate representation of the persisted object, without accidentally firing logic that could potentially change values incorrectly when deserialized.
However, if you don't think there will ever be a need to serialize/deserialize, then this shouldn't be an issue.
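A small sketch of that public-getter/private-setter pattern (the Account type is illustrative):
using System;

public class Account
{
    // A serializer that can write to the private setter restores
    // Balance directly, without firing any business logic.
    public decimal Balance { get; private set; }

    // Deliberate changes go through a method, so the logic
    // never runs accidentally during deserialization.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Deposit must be positive.", "amount");
        Balance += amount;
    }
}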
In my opinion this is absolutely ok. The way I see it, the only justification for having properties as a language feature in the first place is that you can have logic in them. Otherwise you may as well just allow direct access to the underlying data members.
Usually, a property affects only one variable, since that is mainly what it was made for. But sometimes you want a higher-level property that isn't just a 1-to-1 mapping to a variable. In that case, it's normal for it to contain code. But you have to keep in mind that a property is not intended to be used like a function. When you call a function, you know that it will do some processing. When you read a property, you expect it to be fast.
In the end, it's a question of preference, and as with coding standards, how closely you follow what your superior tells you is at your discretion. So it's not bad in itself; it depends on your judgment.
In my opinion, business logic is allowed in a setter/getter only in certain situations. For example: it's fine to put in logic that validates the input, because setters are responsible for maintaining object state, and that state should not be violated. So you should cut that business logic down to the smallest portion of code that is responsible for only one subject.
The other thing is that, ideally, your class should be a POCO. Why? Because it should be reusable, and when a class contains logic in its properties, that reusability can simply be blocked. Imagine you have a SqlServerPerson with some SQL Server-specific validation in its properties; it can then be hard to replace it with, for example, an NHibernatePerson when you change the ORM/DB access.

Are There Reasons To Not Use CustomAttributes?

This is mostly a request for comments: is there a reason I should not go down this road?
I have a multi-tiered, CodeSmith-generated application. At the UI level there need to be some fields that are required, and the required fields will vary depending on field values in the bound entity. What I am thinking of doing is adding a "PropertyRequired" CustomAttribute to each property in the entities, which I can set to true or false when I load the entity in its manager. Then I will use reflection to query the property and give visual feedback to the user at the UI level, and I can validate that all the required properties have a valid value in the manager before I save. I've worked this out as a proof of concept with one property in one entity, but before I try to extend it to the rest of the application I'd like to ask someone with more experience to either tell me to go for it, or tell me why I won't like it when I scale up. If this is a bad idea, or if you can suggest a better approach, please offer your opinion.
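For reference, a minimal sketch of the attribute-plus-reflection approach described in the question (PropertyRequired is the question's own name; everything else is illustrative). Note the caveat the answers below touch on: attribute arguments are fixed at compile time, so truly per-instance required flags need a different mechanism.
using System;
using System.Linq;

[AttributeUsage(AttributeTargets.Property)]
public class PropertyRequiredAttribute : Attribute
{
    public bool Required { get; private set; }

    public PropertyRequiredAttribute(bool required)
    {
        Required = required;
    }
}

public class CustomerEntity
{
    [PropertyRequired(true)]
    public string Name { get; set; }

    [PropertyRequired(false)]
    public string Nickname { get; set; }
}

public static class RequiredFieldValidator
{
    // Uses reflection to find properties marked as required
    // and checks that each one has a non-empty value.
    public static bool AllRequiredFieldsSet(object entity)
    {
        return entity.GetType()
            .GetProperties()
            .Where(p => p.GetCustomAttributes(typeof(PropertyRequiredAttribute), true)
                         .Cast<PropertyRequiredAttribute>()
                         .Any(a => a.Required))
            .All(p => !string.IsNullOrEmpty(p.GetValue(entity, null) as string));
    }
}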
It is a pretty reasonable way to do it (I've done something very similar before) - but there are always downsides:
any code needing the entity will need the extra reference (assuming that the attribute and entity are in different assemblies)
the values (unless you are clever about it) must be determined at compile-time
you can't use it on entities outside of your control
In most cases the above aren't a problem. If they are an issue, you might want to support an external metadata model - but unless you need it, this would be overkill. Don't do it unless you must (meaning: go ahead and use attributes; they are usually fine).
There is no inherent reason to avoid custom attributes. It is a supported CLR feature which is the backbone for many available products (Code Contracts, FxCop, etc.).
This is not an unreasonable approach and healthier than baking this stuff into a UI tier. There are a couple of points worth considering before taking the full dive:
You are tightly coupling business logic with the business entity itself. Are there circumstances where a field being required, or its valid values, could change? You may be limiting yourself, or be faced with an inconsistent validation mechanism
Dynamic assignment is possible but more tricky - i.e. when you set a field to be required, that's what it will be unless you override it
Custom attributes can be quite inflexible if further down the line you want to do something more complicated - namely, if you need to pass state into an attribute-driven validation scheme. Attributes favour declarative assignment. Only having a true/false required property shouldn't be an issue here, though
Just playing devil's advocate really; in general, for a fairly simple application where you only care about required fields, this is quite a tidy way of doing it

How to deal with unstable 3rd party object tree (sorry, I can’t come up with a better title)?

Let's say I have to use an unstable assembly that I cannot refactor. My code is supposed to be the client of a CustomerCollection that is (surprise) a collection of Customer instances. Each customer instance has further properties, like a collection of Order instances. I hope you get the idea.
Since the assembly does not behave all that well, my approach is to wrap each class in a façade where I can deal with exceptions and workarounds and all that stuff. (To make things more complicated, I would like to design the wrapper to be usable with WPF with regard to data binding.)
So my question is about the design of the wrapper, e.g. CustomerCollectionFacade. How do I expose the object tree (customers, orders, properties of orders)? Is the CustomerWrapper collection stored in a field, or do I create CustomerWrapper instances on the fly (in the get accessor of a property, maybe)?
Any ideas welcome. Thanks!
Edit:
Unfortunately the way proposed by krosenvold is not an option in my case. Since the object tree's behavior is very interactive (editing from multiple views, events fired when properties change), I will not opt to abandon the 'source object'. These changes are supposed to propagate to the source. Thanks anyway.
I generally try to isolate such transformations into one or more adapter classes and let them do the whole story at once. This is a good idea because it is easily testable, all the conversion logic ends up in one place, and you avoid littering the conversion logic all over the place.
Sometimes there is state in the underlying (source) object that is going to be needed when/if you're updating the object. You might not be exposing this data in your cleaned-up API, so it's going to have to be hidden somewhere.
If you choose to encapsulate the original object, there's always the chance that someone will break that encapsulation sometime in the future and start leaking the gory details of the underlying object. That reason alone is usually enough for me not to keep a plain reference to the original instance, so that I still understand what I'm doing six months later when I'm in a hurry. But keeping the originals somewhere else means you need lifecycle management for them, so I usually end up stashing the original away behind some secret interface on the "clean" object.
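A rough sketch of that "stash it behind a secret interface" idea (all names here are illustrative):
namespace ThirdParty
{
    // Stand-in for the unstable vendor type (illustrative only).
    public class Customer
    {
        public string Name { get; set; }
    }
}

// The "secret" interface: internal, so only code in this
// assembly can get at the wrapped original.
internal interface IHasSource
{
    object Source { get; }
}

// The clean, public-facing wrapper used by the rest of the app.
public class CustomerFacade : IHasSource
{
    private readonly ThirdParty.Customer _source;

    public CustomerFacade(ThirdParty.Customer source)
    {
        _source = source;
    }

    public string Name
    {
        get { return _source.Name; }
        set { _source.Name = value; } // edits propagate to the source
    }

    // Explicit implementation keeps the source object off
    // the wrapper's public surface.
    object IHasSource.Source
    {
        get { return _source; }
    }
}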
