Is it a good practice to implement logic in properties - c#

We use ASP.NET with C#. In the open source projects and articles I have gone through, I found that many properties contain logic, but when I did the same, the team leader told me it is not good at all to place logic inside properties and that the logic should be invoked through methods instead...
Is that really bad? And why shouldn't logic be placed in properties?
Thanks,

Property access is expected to be instantaneous (no long waits), consistent (no changing values), and safe (no exceptions). If you can make those guarantees, I think putting logic in properties is OK.

It's fine to have some logic in properties. For example, argument validation in setters and lazy computation in getters are both fairly common.
It's usually a bad idea for a property access to do something expensive such as a database call, however. Developers tend to assume that properties are reasonably cheap to evaluate.
It's a judgement call in the end - but I certainly reject the suggestion that properties should only ever be trivial to the extent that they could be implemented with automatic properties.
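For illustration, here is a minimal sketch of both patterns mentioned above; the Person class and its members are hypothetical, and the cached DisplayName is just one way to do lazy computation:
using System;

public class Person
{
    private string _name = "";
    private string _displayName;   // cached, computed lazily

    public string Name
    {
        get { return _name; }
        set
        {
            // Argument validation in the setter.
            if (string.IsNullOrEmpty(value))
                throw new ArgumentException("Name must not be empty.");
            _name = value;
            _displayName = null;   // invalidate the cached value
        }
    }

    // Lazy computation in the getter: computed on first access, then cached.
    public string DisplayName
    {
        get
        {
            if (_displayName == null)
                _displayName = _name.ToUpperInvariant();
            return _displayName;
        }
    }
}
Both accessors stay cheap and exception-free in the normal case, which keeps the guarantees callers expect from a property.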

Properties are methods. They are just shortcuts for getters/setters. Any logic that would be valid in a getter/setter is reasonable to put in a property. Any logic that you would normally not put in a getter/setter would be inappropriate to put in a property. Generally speaking, if you (as a consumer of the class) couldn't reasonably expect that setting a property value, or even worse, getting a property value might cause a behavior to take place, then that logic probably belongs elsewhere. In other words, the logic should be related to and consistent with getting or setting the property.
Quoting from the linked article above:
Properties are members that provide a flexible mechanism to read, write, or compute the values of private fields. Properties can be used as though they are public data members, but they are actually special methods called accessors. This enables data to be accessed easily while still providing the safety and flexibility of methods.

A common answer applies here: It Depends.
Generally, it is not a good idea to implement business logic in getters and setters. If your object is a simple DTO (data transfer object) this would violate Single Responsibility.
However, state-tracking logic and other housekeeping is often found in properties. For example, Entity Framework 4 self-tracking entities have state management logic in every primitive property setter to allow for tracking.
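As a rough sketch of that state-tracking idea (illustrative only, not EF's actual generated code; Customer and IsDirty are made-up names), such a setter might look like this:
public class Customer
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name != value)
            {
                _name = value;
                IsDirty = true;   // housekeeping: remember that state changed
            }
        }
    }

    // Read by the persistence code to decide whether an update is needed.
    public bool IsDirty { get; private set; }
}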
An alternative to logic in properties is Aspect-Oriented Programming (AOP). Using AOP, you can "inject" logic between objects and the hosting process. Access to objects can be "intercepted" and handled conditionally.

Placing business logic in a setter can get you in trouble if you ever need to serialize/deserialize your objects with JSON, XML or an ORM. An example of this is when using a NoSQL datastore such as a document database, or an ORM. Some of these (e.g. NHibernate) can be configured to access backing fields instead of the setter.
I find that using a public getter and a private setter, along with a method that sets the value and applies any additional logic, is a good approach, as sketched below. Most serializers can access the private setter, so what you end up with is an accurate representation of the persisted object, without accidentally firing logic that could change values incorrectly when deserializing.
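A minimal sketch of that pattern, with illustrative names only (Order, ApplyTotal): serializers rehydrate through the private setter, while application code goes through the method that applies the extra logic.
using System;

public class Order
{
    public decimal Total { get; private set; }

    public void ApplyTotal(decimal total)
    {
        if (total < 0)
            throw new ArgumentOutOfRangeException(nameof(total));
        Total = total;
        // ...any other logic that should run on a real change,
        // but not when the object is rehydrated from storage.
    }
}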
However, if you don't think there will ever be a need to serialize/deserialize, then this shouldn't be an issue.

In my opinion this is absolutely ok. The way I see it, the only justification for having properties as a language feature in the first place is that you can have logic in them. Otherwise you may as well just allow direct access to the underlying data members.

Usually, a property only affects one variable, since it was made mainly for that purpose. But sometimes you want a higher-level property that isn't just a 1-to-1 mapping to a variable. In that case, it's normal for it to contain code. But you have to keep in mind that a property is not intended to be used like a function. When you call a function, you know that it will do some processing. When you call a property, you expect it to be fast.
In the end, though, it's a question of preference and, like coding standards, how closely you follow what your superior tells you is at your discretion. So it's not bad; it depends on your judgment.

In my opinion, business logic is allowed in setters/getters only in certain situations. For example, it's allowed to put in logic that's responsible for validating the input, because setters are responsible for maintaining object state, and that state should not be violated. So you should cut that business logic down to the smallest portion of code that is responsible for only one subject.
The other thing is that your class should ideally be a POCO. Why? Because it should be reusable, and when a class contains logic in its properties, reusability can easily be blocked. Imagine you have a SqlServerPerson with some SQL Server-specific validation in its properties; it can then be hard to replace it with, for example, an NHibernatePerson when you change the ORM/DB access.

Related

Model.Is___ - Should it be a Property or a Method?

As I design the models for a domain, they almost always end up having some .IsSomething functionality on them. IsNew and IsDirty are common for data persistence purposes, IsValid for business rule validation, even IsFraudulent in a current project (more business rule validation), etc. Whenever I see these implemented by others, they are almost invariably implemented as methods. But I find myself wondering if there's a particular reason for that.
I tend to see properties as describing an object and methods as performing some kind of action. These don't really perform an action. They involve code because they're dynamically determined when called, and they're clearly read-only, but to me they still fit as properties rather than methods.
There could potentially be a serialization issue with properties, I suppose. Though a rich domain model tends not to serialize well anyway given that it contains logic and functionality, so any time I need to move something across a service boundary I generally flatten it into a defined DTO structure first anyway.
But I wonder if anybody else has any insight on the subject? Is there a good reason to implement these as methods rather than as properties?
(Tangentially related, though an answer has already been given, extension properties would really help with consistency on something like this. I have a number of IsSomething() extension methods, usually on System.String, for implementing domain-specific logic. But even if properties are the way to go, I may want to stick with methods just for consistency with the extensions.)
Assuming that accessing the property:
Has no side-effects
Is "reasonably speedy" (yeah, very woolly...)
then I see no reason not to make it a property. The serialization shouldn't be an issue - most serialization schemes provide ways of marking a property as transient (i.e. not-to-be-serialized).
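For example, here is a sketch using the XML serializer's [XmlIgnore] marker to keep a computed, read-only property out of the serialized output (Document and IsValid are illustrative names; other serializers have similar markers, e.g. [JsonIgnore]):
using System.Xml.Serialization;

public class Document
{
    public string Title { get; set; }

    // Computed on the fly; excluded from XML serialization.
    [XmlIgnore]
    public bool IsValid
    {
        get { return !string.IsNullOrEmpty(Title); }
    }
}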
I would use a property because:
It describes the object in some way, so conceptually it is a characteristic, a property, of the object
It does not ask for any parameters
It basically just retrieves certain data; it does not perform any standalone actions or modifications

Where do you put Validation in projects with Domain Driven Design?

Where should I place the Validation logic of the Domain objects in my solution? Should I put them in the Domain classes, Business layer or else?
I would also like to make use of Validation Application Block and Policy Injection Application Block from Microsoft Enterprise Library for this.
What validation strategy should I be using to fit all these together nicely?
Thanks all in advance!
It depends. First, you need to understand what you are validating.
You might validate:
that the value you retrieve from an HTTP post can be parsed as a DateTime,
that Customer.Name is not longer than 100 symbols,
that the Customer has enough money to purchase stuff.
As you can see, these validations are different in nature, so they should be separated. Their importance varies too (see the "All rules aren't created equal" paragraph).
One thing you might want to consider is not allowing a domain object to be in an invalid state.
That greatly reduces complexity, because at any given moment you know the object is valid and you only need to validate the things related to the current task in order to advance.
Also, you should consider avoiding the use of tools in your domain model, because it should be as infrastructure-free as possible.
Another thing: embrace value objects. They are great for encapsulating validation.
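A minimal sketch of such a value object, with a hypothetical CustomerName that enforces the 100-symbol rule mentioned above:
using System;

public sealed class CustomerName
{
    public string Value { get; }

    public CustomerName(string value)
    {
        // The invariants live with the value itself, so a CustomerName
        // instance can never exist in an invalid state.
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentException("Name is required.");
        if (value.Length > 100)
            throw new ArgumentException("Name must not be longer than 100 symbols.");
        Value = value;
    }
}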
You can do either, depending on your needs.
Putting it in domain classes makes sure the validation is always done, but can make the classes bloated. It also can go against the single responsibility principle depending on how you interpret that (it adds the responsibility to validate). Putting it in domain classes also restricts you to one kind of validation. Also, unless you use inheritance, the same rule might have to be implemented multiple times in related classes (DRY). Validation is spread out through your domain if you do it this way.
External validation (you can get a validation object through DI, factories, the business layer, or context) makes sure you can swap out the validation rules depending on context (e.g. for a long-running process that you want to save in a partially finished state, you could have one validation object just to be able to save, and another to check whether the domain class is really valid and ready to be used). Your domain classes will be simpler (fewer responsibilities, though you'd still have to do minimal checks, like null checks, to prevent runtime errors), and you could reuse rule sets for related classes as well. Validation is centred in a small area of your domain model this way. By the way, you can inject the external validation into the domain class itself, making sure the classes do validate themselves; they just don't know what they are validating.
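A rough sketch of that idea, with illustrative names only (IValidator<T>, DraftRules, SubmitRules): the same Customer can be checked against different rule sets depending on context.
using System.Collections.Generic;

public class Customer
{
    public string Name { get; set; }
    public decimal Balance { get; set; }
}

public interface IValidator<T>
{
    IEnumerable<string> Validate(T instance);
}

// Loose rules: enough to save a partially finished draft.
public class DraftRules : IValidator<Customer>
{
    public IEnumerable<string> Validate(Customer c)
    {
        if (string.IsNullOrEmpty(c.Name))
            yield return "Name is required, even for a draft.";
    }
}

// Strict rules: used when the object must be fully valid.
public class SubmitRules : IValidator<Customer>
{
    public IEnumerable<string> Validate(Customer c)
    {
        if (string.IsNullOrEmpty(c.Name))
            yield return "Name is required.";
        if (c.Balance < 0)
            yield return "Balance must not be negative.";
    }
}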
I can't comment on the Validation Application Block, though. As always, you have to weigh the pros against the cons; there is never one single valid solution.
First off, I agree with #i8abug.
But I did want to go a bit further to talk architecture. Every one of those design architectures, like domain driven, should be taken as nothing more than a suggestion and viewed with scrutiny.
At every step you should ask yourself what the benefit and drawbacks of the point in question is with regards to your application.
A lot of these involve adding a tremendous amount of code and seriously complicating projects with very little benefit.
The validation point is a prime example. As Stefan said, the principle of single responsibility basically says you need to create a whole set of other classes whose purpose is to only validate the state of the original objects. Obviously this adds a LOT of code to the app. Maybe it's generated for you, maybe you have to hand write it. Regardless, more code generally equates to being less robust and certainly equates to being harder to understand.
The benefit of separating all of that is that you can swap out validation rules. Ok, fine. The drawback is that you now have 2 files to look at and create for each class definition. ie: more work. Does your app need to swap out validation rules? Probably not. I'd even wager to say very very few do.
Quite frankly, if you go down this path then you may as well define everything as a struct and let all of those "helper" classes creep back to take care of validation, persistence, setting properties, etc as being a full blown class buys you almost nothing.
All of that said, I tend towards self contained classes. In other words they know how their properties relate to each other and know what are acceptable values. They can also perform operations on themselves and their children. In other words, they know what they are. This tends to lead to simplified coding and implementation. It also leads to knowing exactly where to go for a modification or change. The only separation I really do here is to implement Inversion of Control for persistence; which allows me to swap out data providers at runtime; which has been a requirement on several applications I've done.
Point is, think through what you are doing and decide if it's really the best way to go in your particular situation. All of these programming "rules" are just suggestions after all.
I generally put it in the domain objects. This is because the domain objects are the things that I am concerned about validating so if a rule for a specific object changes, I know where to update it rather than having to search through a bunch of unrelated entity rules in some specific validation class/file.
I realize this may not be considered POCO, but every project has specific exceptions and this one often makes sense to me. Likewise, in some projects it makes sense to have your domain entities referenced from the views and, therefore, implement INotifyPropertyChanged rather than constantly copying values from entities to a whole other set of view-specific objects.
The old way I did validation was to have an IValidator interface, like the one below, which each entity implemented.
public interface IValidator
{
    IList<RuleViolation> GetViolations();
}
Now I do this using NHibernate Validation (you don't need to use the NHibernate ORM to take advantage of the validation library; it is done simply through attributes).
//I can't remember the exact syntax, but it is very similar to this
public class MyEntity
{
    [Length(Min = 1, Max = 10)]
    public string Name { get; set; }
}

//... and then later ...
NHibernateValidator.Validate(myEntity);
Edit: I removed my earlier comment about not being a huge fan of Enterprise Library in general, since Chris informed me that it is now very similar to NHibernate Validation.

Are protected members/fields really that bad?

Now, if you read the naming conventions on MSDN for C#, you will notice that they state that properties are always preferred over public and protected fields. I have even been told by some people that you should never use public or protected fields. Now, I will agree I have yet to find a reason why I would need a public field, but are protected fields really that bad?
I can see it if you need to make sure certain validation checks are performed when getting/setting the value; however, a lot of the time it seems like just extra overhead in my opinion. I mean, let's say I have a class GameItem with fields for baseName, prefixName, and suffixName. Why should I take on the overhead of creating the properties (C#) or accessor methods, and the performance hit I would incur (if I do this for every single field in an application, I am sure it would add up at least a little, especially in certain languages like PHP, or in certain applications where performance is critical, like games)?
Are protected members/fields really that bad?
No. They are way, way worse.
As soon as a member is more accessible than private, you are making guarantees to other classes about how that member will behave. Since a field is totally uncontrolled, putting it "out in the wild" opens your class and classes that inherit from or interact with your class to higher bug risk. There is no way to know when a field changes, no way to control who or what changes it.
If now, or at some point in the future, any of your code ever depends on a field having some certain value, you now have to add validity checks and fallback logic in case it's not the expected value - every place you use it. That's a huge amount of wasted effort when you could've just made it a damn property instead ;)
The best way to share information with deriving classes is the read-only property:
protected object MyProperty { get; }
If you absolutely have to make it read/write, don't. If you really, really have to make it read-write, rethink your design. If you still need it to be read-write, apologize to your colleagues and don't do it again :)
A lot of developers believe - and will tell you - that this is overly strict. And it's true that you can get by just fine without being this strict. But taking this approach will help you go from just getting by to remarkably robust software. You'll spend far less time fixing bugs.
And regarding any concerns about performance - don't. I guarantee you will never, in your entire career, write code so fast that the bottleneck is the call stack itself.
OK, downvote time.
First of all, properties will never hurt performance (provided they don't do much). That's what everyone else says, and I agree.
Another point is that properties are good in that you can place breakpoints in them to capture getting/setting events and find out where they come from.
The rest of the arguments bother me in this way:
They sound like "argument by prestige". If MSDN says it, or some famous developer or author whom everybody likes says it, it must be so.
They are based on the idea that data structures have lots of inconsistent states, and must be protected against wandering or being placed into those states. Since (it seems to me) data structures are way over-emphasized in current teaching, then typically they do need those protections. Far more preferable is to minimize data structure so that it tends to be normalized and not to have inconsistent states. Then, if a member of a class is changed, it is simply changed, rather than damaged. After all, somehow lots of good software was/is written in C, and that didn't suffer massively from lack of protections.
They are based on defensive coding carried to extremes. It is based on the idea that your classes will be used in a world where nobody else's code can be trusted not to goose your stuff. I'm sure there are situations where this is true, but I've never seen them. What I have seen is situations where things were made horribly complicated to get around protections for which there was no need, and to try to guard the consistency of data structures that were horribly over-complicated and un-normalized.
Regarding fields vs. properties, I can think of two reasons for preferring properties in the public interface (protected is also "public" in the sense that someone other than just your class can see it).
Exposing properties gives you a way to hide the implementation. It also allows you to change the implementation without changing the code that uses it (e.g. if you decide to change the way data are stored in the class)
Many tools that work with classes using reflection only focus on properties (for example, I think that some libraries for serialization work this way). Using properties consistently makes it easier to use these standard .NET tools.
Regarding overheads:
If the getter/setter is the usual one-line piece of code that simply reads/sets the value of a field, then the JIT should be able to inline the call, so there is no performance overhead.
Syntactical overhead is largely reduced when you're using automatically implemented properties (C# 3.0 and newer), so I don't think this is an issue:
protected int SomeProperty { get; set; }
In fact, this allows you to make, for example, the setter protected and the getter public very easily, so this can be even more elegant than using fields.
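For instance, exposing a public getter with a protected setter is a one-liner (the property name is illustrative):
public int SomeProperty { get; protected set; }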
Public and/or protected fields are bad because they can be manipulated from outside the declaring class without validation; thus they can be said to break the encapsulation principle of object oriented programming.
When you lose encapsulation, you lose the contract of the declaring class; you cannot guarantee that the class behaves as intended or expected.
Using a property or a method to access the field enables you to maintain encapsulation, and fulfill the contract of the declaring class.
I agree with the read-only property answer. But to play devil's advocate here, it really depends on what you're doing. I'll be happy to admit I write code with public members all the time (I also don't comment, follow guidelines, or do any of the formalities).
But when I'm at work, that's a different story.
It actually depends on if your class is a data class or a behaviour class.
If you keep your behaviour and data separate, it is fine to expose the data of your data classes, as long as they have no behaviour.
If the class is a behaviour class, then it should not expose any data.

Are There Reasons To Not Use CustomAttributes?

This is mostly a request for comments if there is a reason I should not go down this road.
I have a multi-tiered, CodeSmith-generated application. At the UI level, there need to be some fields that are required, and the required fields will vary depending on field values in the bound entity. What I am thinking of doing is adding a "PropertyRequired" CustomAttribute to each property in the entities that I can set to true or false when I load the entity in its manager. Then I will use reflection to query the property and give visual feedback to the user at the UI level, and I can validate that all the required properties have a valid value in the manager before I save. I've worked this out as a proof of concept with one property in one entity, but before I try to extend it to the rest of the application, I'd like to ask someone with more experience to either tell me to go for it, or tell me why I won't like it when I scale up. If this is a bad idea, or if you can suggest a better approach, please offer your opinion.
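For reference, a hedged sketch of the approach described above; every name here (PropertyRequiredAttribute, RequiredFieldChecker, CustomerEntity) is illustrative, not an existing API.
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class PropertyRequiredAttribute : Attribute
{
    public bool IsRequired { get; set; }
}

public class CustomerEntity
{
    [PropertyRequired(IsRequired = true)]
    public string Name { get; set; }

    [PropertyRequired(IsRequired = false)]
    public string Nickname { get; set; }
}

public static class RequiredFieldChecker
{
    // Returns the names of required properties whose value is null or an empty string.
    public static string[] GetMissing(object entity)
    {
        return entity.GetType()
            .GetProperties()
            .Where(p => p.GetCustomAttribute<PropertyRequiredAttribute>()?.IsRequired == true)
            .Where(p =>
            {
                object value = p.GetValue(entity);
                return value == null || (value as string) == string.Empty;
            })
            .Select(p => p.Name)
            .ToArray();
    }
}
Note that, as the answers below point out, the true/false value baked into the attribute is fixed at compile time; varying it per instance when the entity is loaded would need a separate runtime structure alongside (or instead of) the attribute.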
It is a pretty reasonable way to do it (I've done something very similar before) - but there are always downsides:
any code needing the entity will need the extra reference (assuming that the attribute and entity are in different assemblies)
the values (unless you are clever about it) must be determined at compile-time
you can't use it on entities outside of your control
In most cases the above aren't a problem. If they are an issue, you might want to support an external metadata model - but unless you need it, this would be overkill. Don't do it unless you must (meaning: go ahead and use attributes; they are usually fine).
There is no inherent reason to avoid custom attributes. It is a supported CLR feature which is the backbone for many available products (Code Contracts, FxCop, etc ...).
This is not an unreasonable approach and healthier than baking this stuff into a UI tier. There are a couple of points worth considering before taking the full dive:
You are tightly coupling business logic with the business entity itself. Are there circumstances where a field being required, or its valid values, could change? You may be limiting yourself or be faced with an inconsistent validation mechanism.
Dynamic assignment is possible but more tricky - i.e. when you set a field to be required, that's what it will be unless you override it.
Custom attributes can be quite inflexible if further down the line you want to do something more complicated - namely, if you need to pass state into an attribute-driven validation scheme. Attributes favour declarative assignment. Only having a true/false required property shouldn't be an issue here, though.
Just being a devil's advocate really; in general, for a fairly simple application where you only care about required fields, this is quite a tidy way of doing it.

How to deal with unstable 3rd party object tree (sorry, I can’t come up with a better title)?

Let's say I have to use an unstable assembly that I cannot refactor. My code is supposed to be the client of a CustomerCollection that is (surprise) a collection of Customer instances. Each Customer instance has further properties, like a collection of Order instances. I hope you get the idea.
Since the assembly does not behave that well, my approach is to wrap each class in a façade where I can deal with exceptions, workarounds and all that stuff. (To make things more complicated, I would like to design the wrapper to be usable with WPF with regard to data binding.)
So my question is about the design of the wrapper, e.g. CustomerCollectionFacade. How to expose the object tree (customers, orders, properties of orders)? Is the CustomerWrapper collection stored in a field or do I create CustomerWrapper instances on the fly (in the get accessor of a property maybe)?
Any ideas welcome. Thanks!
Edit:
Unfortunately the way proposed by krosenvold is not an option in my case. Since the object tree’s behavior is very interactive (editing from multiple views, events fired if properties change) I will not opt to abandon the ‘source object’. These changes are supposed to propagate to the source. Thanks anyway.
I generally try to isolate such transformations into one or more adapter classes and let them do the whole conversion at once. This is a good idea because it is easily testable, all the conversion logic ends up in one place, and you avoid littering the conversion logic all over the place.
Sometimes there is state in the underlying (source) object that is going to be needed when/if you're updating the object. You might not be exposing this data in your cleaned-up api, so it's going to have to be hidden somewhere.
If you choose to encapsulate the original object, there's always the chance that someone will break that encapsulation sometime in the future and start leaking the gory details of the underlying object. That reason alone is usually enough for me not to keep an exposed reference to the original instance, so that I still understand what I'm doing six months later when I'm in a hurry. But if you keep it somewhere else you'll need lifecycle management for the originals, so I usually end up stashing it away in some secret interface on the "clean" object.
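To make the trade-off concrete, here is a hedged sketch of a wrapper that keeps a reference to the source object (as the question's edit requires) and wraps the child collection lazily in the get accessor, caching the wrappers so data bindings always see the same instances. All member names are illustrative, and the stand-ins for the third-party types are for illustration only.
using System.Collections.Generic;
using System.Linq;

// Stand-ins for the third-party types (illustration only).
public class Order { public decimal Total { get; set; } }
public class Customer
{
    public string Name { get; set; }
    public List<Order> Orders { get; } = new List<Order>();
}

public class OrderWrapper
{
    private readonly Order _source;
    public OrderWrapper(Order source) { _source = source; }
    public decimal Total { get { return _source.Total; } }
}

public class CustomerWrapper
{
    private readonly Customer _source;      // keep the original for write-back
    private List<OrderWrapper> _orders;     // created on first access, then cached

    public CustomerWrapper(Customer source)
    {
        _source = source;
    }

    public string Name
    {
        get { return _source.Name; }
        set { _source.Name = value; }        // propagate edits to the source object
    }

    public IList<OrderWrapper> Orders
    {
        get
        {
            // Wrap lazily, but cache the wrappers so bindings see stable instances.
            if (_orders == null)
                _orders = _source.Orders.Select(o => new OrderWrapper(o)).ToList();
            return _orders;
        }
    }
}
For two-way WPF binding the wrappers would additionally implement INotifyPropertyChanged (and the collection would typically be an ObservableCollection), which is omitted here for brevity.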
