Object Oriented Programming: Separation of Data and Behavior - C#

Recently we had a discussion regarding the separation of data and behavior in classes. The concept is implemented by placing the domain model and its behavior into separate classes.
However, I am not convinced of the supposed benefits of this approach, even though it might have been coined by a "great" (I think it was Martin Fowler, though I am not sure). I present a simple example here. Suppose I have a Person class containing the data for a person and its methods (behavior).
class Person
{
    public string Name;
    public DateTime BirthDate;

    // constructor
    public Person(string Name, DateTime BirthDate)
    {
        this.Name = Name;
        this.BirthDate = BirthDate;
    }

    public int GetAge()
    {
        return (int)((DateTime.Today - BirthDate).TotalDays / 365); // for illustration only
    }
}
Now, separate out the behavior and data into separate classes.
class Person
{
    public string Name;
    public DateTime BirthDate;

    // constructor
    public Person(string Name, DateTime BirthDate)
    {
        this.Name = Name;
        this.BirthDate = BirthDate;
    }
}
class PersonService
{
    Person personObject;

    // constructor
    public PersonService(string Name, DateTime BirthDate)
    {
        this.personObject = new Person(Name, BirthDate);
    }

    // overloaded constructor
    public PersonService(Person personObject)
    {
        this.personObject = personObject;
    }

    public int GetAge()
    {
        return (int)((DateTime.Today - personObject.BirthDate).TotalDays / 365); // for illustration only
    }
}
This is supposed to be beneficial, improving flexibility and providing loose coupling. I do not see how. In my view it introduces extra coding and a performance penalty, since each time we have to instantiate two objects. And I see more problems when extending this code. Consider what happens when we introduce inheritance in the above case: we have to inherit from both classes.
class Employee : Person
{
    public double Salary;

    public Employee(string Name, DateTime BirthDate, double Salary) : base(Name, BirthDate)
    {
        this.Salary = Salary;
    }
}
class EmployeeService : PersonService
{
    Employee employeeObject;

    // constructor
    public EmployeeService(string Name, DateTime BirthDate, double Salary) : base(Name, BirthDate)
    {
        this.employeeObject = new Employee(Name, BirthDate, Salary);
    }

    // overloaded constructor
    public EmployeeService(Employee employeeObject) : base(employeeObject)
    {
        this.employeeObject = employeeObject;
    }
}
Note that even if we segregate the behavior into a separate class, we still need an object of the data class for the behavior class's methods to work on. So in the end our behavior class contains both the data and the behavior, albeit with the data in the form of a model object.
You might say that you can add some interfaces to the mix, so we could have an IPersonService and an IEmployeeService. But introducing interfaces for each and every class and inheriting from them does not seem right to me.
So can you tell me what I have achieved by separating out the data and behavior in the above case that I could not have achieved by having them in the same class?

I agree, the separation as you implemented it is cumbersome. But there are other options. What about an ageCalculator object that has a method getAge(Person p)? Or person.getAge(IAgeCalculator calc)? Or, better yet, calc.getAge(IAgeable a)?
There are several benefits that accrue from separating these concerns. Assuming that you intended for your implementation to return years, what if a person / baby is only 3 months old? Do you return 0? .25? Throw an exception? What if I want the age of a dog? Age in decades or hours? What if I want the age as of a certain date? What if the person is dead? What if I want to use the Martian orbit for a year? Or the Hebrew calendar?
None of that should affect classes that consume the person interface but make no use of birthdate or age. By decoupling the age calculation from the data it consumes, you get increased flexibility and increased chance of reuse. (Maybe even calculate age of cheese and person with same code!)
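A minimal sketch of the calc.getAge(IAgeable a) option might look like the following; the interface and class names are illustrative, not something from the original post.

using System;

// Anything with a birth date can have its age calculated: a person, a dog, a cheese.
public interface IAgeable
{
    DateTime BirthDate { get; }
}

public interface IAgeCalculator
{
    TimeSpan GetAge(IAgeable subject);
}

// One possible policy; swap in another implementation for Martian years,
// a different calendar, or "age as of a given date".
public class ElapsedTimeAgeCalculator : IAgeCalculator
{
    public TimeSpan GetAge(IAgeable subject) => DateTime.Today - subject.BirthDate;
}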
As usual, the optimal design will vary greatly with context. It would be a rare situation, however, in which performance would influence my decision in this type of problem. Other parts of the system are likely several orders of magnitude greater factors, like the speed of light between browser and server, database retrieval, or serialization. Time and dollars are better spent refactoring toward simplicity and maintainability than on theoretical performance concerns. To that end, I find separating data and behavior of domain models to be helpful. They are, after all, separate concerns, no?
Even with such priorities, things are muddled. Now the class that wants the person's age has another dependency, the calc class. Ideally, fewer class dependencies are desirable. Also, who is responsible for instantiating calc? Do we inject it? Create a calcFactory? Or should it be a static method? How does the decision affect testability? Has the drive toward simplicity actually increased complexity?
There seems to be a disconnect between OO's insistence on combining behavior with data and the single responsibility principle. When all else fails, write it both ways and then ask a coworker, "which one is simpler?"

Actually, Martin Fowler says that in the domain model, data and behavior should be combined. Take a look at AnemicDomainModel.

I realize I am about a year late on replying to this but anyway... lol
I have separated the Behaviors out before but not in the way you have shown.
It is when you have Behaviors that should have a common interface yet allow for different (unique) implementation for different objects that separating out the behaviors makes sense.
If I was making a game, for example, some behaviors available for objects might be the ability to walk, fly, jump and so forth.
By defining interfaces such as IWalkableBehavior, IFlyableBehavior and IJumpableBehavior, and then making concrete classes based on these interfaces, you get great flexibility and code reuse.
For IWalkableBehavior you might have...
CannotWalk : IWalkableBehavior
LimitedWalking : IWalkableBehavior
UnlimitedWalking : IWalkableBehavior
Similar pattern for IFlyableBehavior and IJumpableBehavior.
These concrete classes would implement the behavior for CannotWalk, LimitedWalking and UnlimitedWalking.
In your concrete classes for the objects (such as an enemy) you would have a local instance of these Behaviors. For example:
IWalkableBehavior _walking = new CannotWalk();
Others might use new LimitedWalking() or new UnlimitedWalking();
When the time comes to handle the behavior of an enemy, say the AI finds that the player is within a certain range of the enemy (this could be a behavior as well, say IReactsToPlayerProximity), it may naturally attempt to move the enemy closer to "engage" the player.
All that is needed is for the _walking.Walk(int xdist) method to be called and it will automagically be sorted out. If the object is using CannotWalk then nothing will happen because the Walk() method would be defined as simply returning and doing nothing. If using LimitedWalking the enemy may move a very short distance toward the player and if UnlimitedWalking the enemy may move right up to the player.
I might not be explaining this very clearly, but basically what I mean is to look at it the opposite way. Instead of encapsulating your object (what you are calling the data here) into the behavior class, encapsulate the behavior into the object using interfaces. This gives you the "loose coupling", allowing you to refine the behaviors as well as easily extend each "behavioral base" (walking, flying, jumping, etc.) with new implementations, yet your objects themselves know no difference. They just have a walking behavior, even if that behavior is defined as CannotWalk.
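A minimal sketch of that composition; the Enemy class and the walking range used by LimitedWalking are invented here purely for illustration.

using System;

public interface IWalkableBehavior
{
    void Walk(int distance);
}

public class CannotWalk : IWalkableBehavior
{
    public void Walk(int distance) { /* deliberately does nothing */ }
}

public class LimitedWalking : IWalkableBehavior
{
    public void Walk(int distance) => Console.WriteLine($"Walked {Math.Min(distance, 2)} tiles");
}

public class Enemy
{
    // The enemy composes a walking behavior instead of inheriting one.
    private readonly IWalkableBehavior _walking;

    public Enemy(IWalkableBehavior walking) => _walking = walking;

    public void EngagePlayer(int distanceToPlayer) => _walking.Walk(distanceToPlayer);
}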

Funnily enough, OOP is often described as combining data and behavior.
What you're showing here is something I consider an anti-pattern: the "anemic domain model." It does suffer from all the problems you've mentioned, and should be avoided.
Different levels of an application might have a more procedural bent, which lends itself to a service model like the one you've shown, but that would usually only be at the very edge of a system. And even so, that would internally be implemented by traditional object design (data + behavior). Usually, this is just a headache.

Age is intrinsic to a person (any person). Therefore it should be part of the Person object.
hasExperienceWithThe40mmRocketLauncher() is not intrinsic to a person, but perhaps to the interface MilitaryService that can either extend or aggregate the Person object. Therefore it should not be a part of the Person object.
In general, the goal is to avoid adding methods to the base object ("Person") just because it's the easiest way out, as you introduce exceptions to normal Person behavior.
Basically, if you see yourself adding stuff like "hasServedInMilitary" to your base object, you are in trouble. Next you will be doing loads of statements such as if (p.hasServedInMilitary()) blablabla. This is really logically the same as doing instanceOf() checks all the time, and indicates that Person and "Person who has seen military service" are really two different things, and should be disconnected somehow.
Taking a step back, OOP is about reducing the number of if and switch statements, and instead letting the various objects handle things as per their specific implementations of abstract methods/interfaces. Separating data and behavior promotes this, but there's no reason to take it to extremes and separate all data from all behavior.
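For instance, the military-service concern might be modeled as its own type that aggregates a Person; a rough sketch, with the class and member names invented for illustration:

using System;

public class Person
{
    public string Name { get; set; }
    public DateTime BirthDate { get; set; }
}

// Military-specific data and behavior live together, wrapping a Person
// rather than bloating the Person class itself.
public class MilitaryServiceRecord
{
    private readonly Person _person;

    public MilitaryServiceRecord(Person person)
    {
        _person = person;
    }

    public bool HasExperienceWithThe40mmRocketLauncher { get; set; }
}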

The approach you have described is consistent with the strategy pattern. It facilitates the following design principles:
The open/closed principle
Classes should be open for extension but closed for modification
Composition over Inheritance
Behaviours are defined as separate interfaces and specific classes that implement these interfaces. This allows better decoupling between the behaviour and the class that uses the behaviour. The behaviour can be changed without breaking the classes that use it, and the classes can switch between behaviours by changing the specific implementation used without requiring any significant code changes.
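A rough sketch of the strategy pattern in C#; the salary-calculation behaviour here is an invented example, not something from the question:

public interface ISalaryStrategy
{
    decimal Calculate(decimal baseSalary);
}

public class StandardSalary : ISalaryStrategy
{
    public decimal Calculate(decimal baseSalary) => baseSalary;
}

public class ContractorSalary : ISalaryStrategy
{
    public decimal Calculate(decimal baseSalary) => baseSalary * 1.2m;
}

public class Employee
{
    // The behaviour is injected; swapping strategies requires no change to Employee.
    public ISalaryStrategy SalaryStrategy { get; set; } = new StandardSalary();
    public decimal BaseSalary { get; set; }

    public decimal Pay() => SalaryStrategy.Calculate(BaseSalary);
}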

The answer is really that it's good in the right situation. As a developer, part of your job is to determine the best solution for the problems presented and to position that solution to accommodate future needs.
I don't often follow this pattern myself, but if the compiler or environment is designed specifically to support the separation of data and behavior, there are many optimizations that can be achieved in how the platform handles and organizes your scripts.
It's in your best interest to familiarize yourself with as many design patterns as possible rather than custom-building your entire solution every time, and don't be too judgmental just because a pattern doesn't immediately make sense. You can often use existing design patterns to achieve flexible and robust solutions throughout your code. Just remember that they are all meant as starting points, so you should always be prepared to customize them to accommodate the individual scenarios you encounter.

Related

How to better reduce class dependencies so it is easier to unit-test them

I am developing a Genetic Algorithm framework and initially decided on the following IIndividual definition:
public interface IIndividual : ICloneable
{
    int NumberOfGenes { get; }
    double Fitness { get; }
    IGene GetGeneAt(int index);
    void AddGene(IGene gene);
    void SetGeneAt(int index, IGene gene);
    void Mutate();
    IIndividual CrossOverWith(IIndividual individual);
    void CalculateFitness();
    string ToString();
}
It looked alright, but as soon as I developed other classes that used IIndividual, I came to the conclusion that making Unit-Tests for those classes would be kind of painful. To understand why, I'll show you the dependency graph of IIndividual:
So, when using IIndividual, I end up also having to create and manage instances of IGene and IInterval.
I can easily solve the issue, redefining my IIndividual interface to the following:
public interface IIndividual : ICloneable
{
    int NumberOfGenes { get; }
    void AddGene(double value, double minValue, double maxValue);
    void SetGeneAt(int index, double value);
    double GetGeneMinimumValue(int index);
    double GetGeneMaximumValue(int index);
    double Fitness { get; }
    double GetGeneAt(int index);
    void Mutate();
    IIndividual CrossOverWith(IIndividual individual);
    void CalculateFitness();
    string ToString();
}
with the following dependency graph:
This will be pretty easy to test, at the expense of some performance degradation (which I'm not that worried about at the moment) and a heavier IIndividual (more methods). There is also a big problem for the clients of IIndividual: if they want to add a gene, they'll have to pass all the little parameters of Gene "manually" in AddGene(value, minimumValue, maximumValue), instead of AddGene(gene)!
My question is:
Which design do you prefer, and why? Also, what are the criteria for knowing where to stop?
I could do just the same thing I did to IIndividual to IGene, so anyone that uses IGene doesn't have to know about Interval.
I have a class Population that will serve as a collection of IIndividuals. What stops me from doing to Population the same thing I did to IIndividual? There must be some kind of boundary, some criterion for understanding in which cases it is best to just let it be (keep some dependencies) and in which cases it is best to hide them (as in the second IIndividual implementation).
Also, when implementing a framework that's supposed to be used by other people, I feel like the second design is less pretty (and is maybe harder for others to understand).
Thanks!
@andersoj has you on the right track regarding Feathers' book, but here's some deeper analysis:
Eric Evans' Domain-Driven Design addresses the type of boundary you're pondering. The free InfoQ summary is a good place to start, as Evans is too verbose in my humble opinion. What Evans describes as aggregate roots are the kind of boundary I think you need to consider.
Individual looks like an aggregate root, implying that Genes and Intervals only exist for, and can only be created by, the Individual class. Assuming, then, that these classes don't need to be used by clients, your refactoring of the Individual class to effectively hide, or remove, the other two makes sense.
On the other hand, if clients do need access to Gene and Interval independently of the individual, I suggest considering making them immutable so the state of the Individual cannot be altered implicitly.
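A rough sketch of treating Individual as the aggregate root, assuming Gene and Interval stay internal to the aggregate (the concrete classes below are invented for illustration):

using System.Collections.Generic;

// Internal building blocks, created and owned by the aggregate.
internal class Interval
{
    public double Min { get; }
    public double Max { get; }
    public Interval(double min, double max) { Min = min; Max = max; }
}

internal class Gene
{
    public double Value { get; }
    public Interval Range { get; }
    public Gene(double value, Interval range) { Value = value; Range = range; }
}

public class Individual
{
    private readonly List<Gene> _genes = new List<Gene>();

    public int NumberOfGenes => _genes.Count;

    // Clients pass raw values; the aggregate builds its own Genes and Intervals.
    public void AddGene(double value, double minValue, double maxValue)
    {
        _genes.Add(new Gene(value, new Interval(minValue, maxValue)));
    }

    public double GetGeneAt(int index) => _genes[index].Value;
}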
In either case, start with characterization tests, that is, tests that capture what the code is doing now. Then refactor from there.
(I apologize for not directly answering your question, but I can't help but point you this way...)
I can't recommend the book Working Effectively with Legacy Code (by Michael Feathers) highly enough. It's an outstanding treatment of the challenges of getting code under (unit, functional) test.

Can a class that is inherited many times be slower?

When I try to create a good object hierarchy that will help me write less code and avoid unnecessary fields, I feel free to create many base classes (usually abstract) for good grouping.
What can be the disadvantages of doing it like that? Can a class that is inherited many times be slower?
Can many unnecessary abstract classes without good names cause confusion when you encounter them in IntelliSense (auto-complete)? What else?
Can a class that is inherited many times be slower?
There's only one way to answer performance questions: try it both ways, and measure the results. Then you'll know.
What can be the disadvantages of doing it like that?
The disadvantages of overly complex object hierarchies are:
1) they are confusing because they represent concepts that are not in the business domain
For example, you might want to have a storage system that can store information about employees, computers and conference rooms. So you have classes StorableObject, Employee, Room, Computer, where Employee, Room and Computer inherit from StorableObject. You mean "StorableObject" to represent something about your implementation of your database. Someone naively reading your code would ask "Why is a person a "storable object?" Surely a Computer is a storable object, and a Room is where it is stored. When you mix up the mechanisms of the shared code with the meaning of the "is a kind of" relationship in the business domain, things get confusing.
2) you only get one "inheritance pivot" in C#; it's a single inheritance language. When you make a choice to use inheritance for one thing, that means you've chosen to NOT use inheritance for something else. If you make a base class Vehicle, and derived classes MilitaryVehicle and CivilianVehicle, then you have just chosen to not have a base class Aircraft, because an aircraft can be either civilian or military.
You've got to choose your inheritance pivot very carefully; you only have one chance to get it right. The more complicated your code sharing mechanism is, the more likely you are to paint yourself into a corner where you're stuck with a bunch of code shared, but cannot use inheritance to represent concepts that you want to model.
There are lots of ways to share code without inheritance. Try to save the inheritance mechanism for things that really need it.
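For example, the storage concern from the StorableObject hierarchy above could be expressed as a capability interface instead of a base class; a rough sketch, with names invented for illustration:

using System.Collections.Generic;

// Storage is a capability, not an ancestry; Employee keeps its inheritance pivot free.
public interface IStorable
{
    string StorageKey { get; }
}

public class Employee : IStorable
{
    public string Name { get; set; }
    public string StorageKey => "employee:" + Name;
}

public class StorageSystem
{
    private readonly Dictionary<string, IStorable> _items = new Dictionary<string, IStorable>();

    public void Store(IStorable item) => _items[item.StorageKey] = item;
}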
I have just made a very simple practical test (unscientific though) where I created empty classes named A, B, C ... Q, where B inherited from A, C from B and so on to Q inheriting from P.
When attempting to retrieve some metrics on this, I created some loops in which I simply created x number of A objects, x number of B objects, and so on.
These classes were empty and contained only the default constructor.
Based on this I could see that if it took 1 second (scaled) to create an object of type A, then it took 7-8 seconds to create an object of type Q.
So the answer must be YES, a hierarchy that is too deep can impact object-creation performance. Whether it is noticeable depends on many things, though, including how many objects you are creating.
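A sketch of such a micro-benchmark, with the classes between A and Q trimmed for brevity; this is just one way to measure it, and results will vary by runtime and JIT:

using System;
using System.Diagnostics;

// A shallow and a deep class; in the full test B..P sit between A and Q.
class A { }
class B : A { }
class C : B { }
class Q : C { }   // stands in for the deepest class in the chain

static class HierarchyBenchmark
{
    static void Main()
    {
        const int count = 10_000_000;
        Console.WriteLine("new A(): " + Measure(() => new A(), count) + " ms");
        Console.WriteLine("new Q(): " + Measure(() => new Q(), count) + " ms");
    }

    static long Measure(Func<object> create, int count)
    {
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < count; i++)
        {
            create();
        }
        stopwatch.Stop();
        return stopwatch.ElapsedMilliseconds;
    }
}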
Consider composition over inheritance, but I don't think you'll experience performance issues with this.
Unless you're doing reflection, or something like that where your code has to walk the inheritance tree at runtime, you shouldn't see any speed differences, no matter how many levels of inheritance a class has or how many classes derive from your particular class.
The biggest drawback is going to be making your code unnecessarily brittle.
If class B is inheriting from A just because B is going to need similar fields, you will find yourself in a world of hurt six months later when you decide that A and B need to behave differently. In that regard, I'll echo k_b in suggesting you'll want to look at the composition pattern.

Getting my head around object oriented programming

I am an entry-level .NET developer using it to develop web sites. I started with classic ASP and last year jumped on the ship with a short C# book.
As I developed, I learned more and started to see that, coming from classic ASP, I had always been using C# like a scripting language.
For example, in my last project I needed to encode video on the web server and wrote code like this:
public class Encoder
{
    public static bool Encode(string videopath)
    {
        // ...snip...
        return true;
    }
}
While searching for samples related to my project, I've seen people doing this:
public class Encoder
{
    public static Encode(string videopath)
    {
        EncodedVideo encoded = new EncodedVideo();
        // ...snip...
        encoded.EncodedVideoPath = outputFile;
        encoded.Success = true;
        // ...snip...
    }
}
public class EncodedVideo
{
    public string EncodedVideoPath { get; set; }
    public bool Success { get; set; }
}
As I understand it, the second example is more object oriented, but I don't see the point of using the EncodedVideo object.
Am I doing something wrong? Is it really necessary to use this sort of code in a web app?
Someone once explained OO to me with a soda can.
A soda can is an object, and an object has many properties and many methods. For example...
SodaCan.Drink();
SodaCan.Crush();
SodaCan.PourSomeForMyHomies();
etc...
The purpose of OO Design is theoretically to write a line of code once, and have abstraction between objects.
This means that Coder.Consume(SodaCan.contents); is relevant to your question.
An encoded video is not the same thing as an encoder. An encoder returns an encoded video, and an encoded video may use an encoder, but they are two separate objects. Because they are two different entities serving different functions, they simply work together.
Much like me consuming a soda can does not mean that I am a soda can.
Neither example is really complete enough to evaluate. The second example seems to be more complex than the first, but without knowing how it will be used it's difficult to tell.
Object-oriented design is at its best when it allows you to either:
1) Keep related information and/or functions together (instead of using parallel arrays or the like).
Or
2) Take advantage of inheritance and interface implementation.
Your second example MIGHT be keeping the data together better, if it returns the EncodedVideo object AND the success or failure of the method needs to be kept track of after the fact. In this case you would be replacing a combination of a boolean "success" variable and a path with a single object, clearly documenting the relation of the two pieces of data.
Another possibility not touched on by either example is using inheritance to better organize the encoding process. You could have a single base class that handles the "grunt work" of opening the file, copying the data, etc. and then inherit from that class for each different type of encoding you need to perform. In this case much of your code can be written directly against the base class, without needing to worry about what kind of encoding is actually being performed.
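A sketch of that base-class idea, reusing the EncodedVideo class from the question; the encoder class names and the pass-through encoding step are invented for illustration:

public abstract class VideoEncoderBase
{
    // Shared "grunt work" (reading the input, writing the output) lives in the base class.
    public EncodedVideo Encode(string inputPath, string outputPath)
    {
        byte[] rawData = System.IO.File.ReadAllBytes(inputPath);
        byte[] encodedData = EncodeData(rawData);          // format-specific step
        System.IO.File.WriteAllBytes(outputPath, encodedData);
        return new EncodedVideo { EncodedVideoPath = outputPath, Success = true };
    }

    // Each concrete encoder only supplies the format-specific part.
    protected abstract byte[] EncodeData(byte[] rawData);
}

public class H264Encoder : VideoEncoderBase
{
    protected override byte[] EncodeData(byte[] rawData)
    {
        // Real encoding would go here; a pass-through keeps the sketch runnable.
        return rawData;
    }
}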
Actually the first looks better to me, but shouldn't return anything (or return an encoded video object).
Usually we assume methods complete successfully without exceptional errors - if exceptional errors are encountered, we throw an exception.
Object oriented programming is fundamentally about organization. You can program in an OO way even without an OO language like C#. By grouping related functions and data together, it is easier to deal with increasingly complex projects.
You aren't necessarily doing something wrong. The question of which paradigm works best is highly debatable and isn't likely to have a clear winner, as there are so many different ways to measure "good" code, e.g. maintainability, scalability, performance, reusability, modularity, etc.
It isn't necessary, but it can be useful in some cases. Take a look at various MVC examples to see OO code. Generally, OO code has the advantage of being reusable, so that what was written for one application can be used for others over and over again. For example, look at log4net, a logging framework that many people use.
The way you structure an OO program--which objects you use and how you arrange them--really depends on many factors: the age of the project, its overall size, the complexity of the problem, and a bit of personal taste.
The best advice I can think of that will wrap all the reasons for OO into one quick lesson is something I picked up learning design patterns: "Encapsulate the parts that change." The value of OO is to reuse elements that will be repeated without writing additional code. But obviously you only care to "wrap up" code into objects if it will actually be reused or modified in the future, thus you should figure out what is likely to change and make objects out of it.
In your example, the reason to use the second set up may be that you can reuse the EncodedVideo object else where in the program. Anytime you need to deal with EncodedVideo, you don't concern yourself with the "how do I encode and use video", you just use the object you have and trust it to handle the logic. It may also be valuable to encapsulate the encoding logic if it's complex, and likely to change. Then you isolate changes to just one place in the code, rather than many potential places where you might have used the object.
(Brief aside: The particular example you posted isn't valid C# code. In the second example, the static method has no return type, though I assume you meant to have it return the EncodedVideo object.)
This is a design question, so the answer depends on what you need, meaning there's no right or wrong answer. The first method is simpler, but in the second case you encapsulate the encoding logic in the EncodedVideo class and can easily change the logic (based on the incoming video type, for instance) in your Encoder class.
I think the first example seems simpler, except that I would avoid using statics whenever possible to increase testability.
public class Encoder
{
    private string videoPath;

    public Encoder(string videoPath) {
        this.videoPath = videoPath;
    }

    public bool Encode() {
        // ...snip...
        return true;
    }
}
Is OOP necessary? No.
Is OOP a good idea? Yes.
You're not necessarily doing something wrong. Maybe there's a better way, maybe not.
OOP, in general, promotes modularity, extensibility, and ease of maintenance. This goes for web applications, too.
In your specific Encoder/EncodedVideo example, I don't know if it makes sense to use two discrete objects to accomplish this task, because it depends on a lot of things.
For example, is the data stored in EncodedVideo only ever used within the Encode() method? Then it might not make sense to use a separate object.
However, if other parts of the application need to know some of the information that's in EncodedVideo, such as the path or whether the status is successful, then it's good to have an EncodedVideo object that can be passed around in the rest of the application. In this case, Encode() could return an object of type EncodedVideo rather than a bool, making that data available to the rest of your app.
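In other words, something along these lines, reusing the EncodedVideo class from the question (the output path assigned here is only a placeholder):

public class Encoder
{
    public static EncodedVideo Encode(string videoPath)
    {
        var result = new EncodedVideo();
        // ...snip: run the actual encoding and record the real output path...
        result.EncodedVideoPath = videoPath + ".encoded";   // placeholder value
        result.Success = true;
        return result;
    }
}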
Unless you want to reuse the EncodedVideo class for something else, I think (from the code you've given) your method is perfectly acceptable for this task. Unless there's unrelated functionality in the EncodedVideo and Encoder classes, or they form a massive lump of code that should be split up, you're not really lowering the cohesion of your classes, which is fine. Assuming you don't need to reuse EncodedVideo and the classes are cohesive, splitting them would probably just create unnecessary classes and increase coupling.
Remember: 1. the OO philosophy can be quite subjective and there's no single right answer, 2. you can always refactor later :p

Using delegates instead of interfaces for decoupling. Good idea?

When writing GUI apps I use a top level class that "controls" or "coordinates" the application. The top level class would be responsible for coordinating things like initialising network connections, handling application wide UI actions, loading configuration files etc.
At certain stages in the GUI app control is handed off to a different class, for example the main control swaps from the login screen to the data entry screen once the user authenticates. The different classes need to use functionality of objects owned by the top level control. In the past I would simply pass the objects to the subordinate controls or create an interface. Lately I have changed to passing method delegates instead of whole objects with the two main reasons being:
It's a lot easier to mock a method than a class when unit testing,
It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using.
Some simplified example code is below:
delegate bool LoginDelegate(string username, string password);
delegate void UpdateDataDelegate(BizData data);
delegate void PrintDataDelegate(BizData data);

class MainScreen {
    private MyNetwork m_network;
    private MyPrinter m_printer;
    private LoginScreen m_loginScreen;
    private DataEntryScreen m_dataEntryScreen;

    public MainScreen() {
        m_network = new MyNetwork();
        m_printer = new MyPrinter();
        m_loginScreen = new LoginScreen(m_network.Login);
        m_dataEntryScreen = new DataEntryScreen(m_network.Update, m_printer.Print);
    }
}

class LoginScreen {
    LoginDelegate Login_External;

    public LoginScreen(LoginDelegate login) {
        Login_External = login;
    }
}

class DataEntryScreen {
    UpdateDataDelegate UpdateData_External;
    PrintDataDelegate PrintData_External;

    public DataEntryScreen(UpdateDataDelegate updateData, PrintDataDelegate printData) {
        UpdateData_External = updateData;
        PrintData_External = printData;
    }
}
My question is: while I prefer this approach and it makes good sense to me, how is the next developer who comes along going to find it? In sample and open-source C# code, interfaces are the preferred approach for decoupling, whereas this approach of using delegates leans more towards functional programming. Am I likely to get subsequent developers swearing under their breath at what is, to them, a counter-intuitive approach?
It's an interesting approach. You may want to pay attention to two things:
Like Philip mentioned, when you have a lot of methods to define, you will end up with a big constructor. This will cause deep coupling between classes. One more or one less delegate will require everyone to modify the signature. You should consider making them public properties and using some DI framework.
Breaking down the implementation to the method level can be too granular sometimes. With class/interface, you can group methods by the domain/functionality. If you replace them with delegates, they can be mixed up and become difficult to read/maintain.
It seems the number of delegates is an important factor here.
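For comparison, a sketch of the interface-based grouping this answer alludes to; the IDataService interface and the placeholder BizData class are invented for illustration:

class BizData { }   // placeholder for the question's BizData type

// Related operations grouped behind one interface instead of separate delegates.
interface IDataService
{
    void Update(BizData data);
    void Print(BizData data);
}

class DataEntryScreen
{
    private readonly IDataService _dataService;

    // One constructor dependency, even if more operations are added later.
    public DataEntryScreen(IDataService dataService)
    {
        _dataService = dataService;
    }

    public void Save(BizData data) => _dataService.Update(data);
}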
While I can certainly see the positive side of using delegates rather than an interface, I have to disagree with both of your bullet points:
"It's a lot easier to mock a method than a class when unit testing". Most mock frameworks for c# are built around the idea of mocking a type. While many can mock methods, the samples and documentation (and focus) are normally around types. Mocking an interface with one method is just as easy or easier to mock than a method.
"It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using." Also has it's cons - once a class needs multiple methods, the constructors get large; and once a subordinate class needs a new property or method, rather than just modifying the interface you must also add it to allthe class constructors up the chain.
I'm not saying this is a bad approach by any means - passing functions rather than types does clearly state what you are doing and can reduce your object model complexity. However, in c# your next developer will probably see this as odd or confusing (depending on skill level). Mixing bits of OO and Functional approaches will probably get a raised eyebrow at the very least from most developers you will work with.

Where should I put my first method

I need to add a method that will calculate a weighted sum of a worker's salary and his superior's salary. I would like something like this:
class CompanyFinanse
{
    public decimal WeightedSumOfWorkerSalaryAndSuperior(Worker WorkerA, Worker Superior)
    {
        return WorkerA.Salary + Superior.Salary * 2;
    }
}
Is this a good design, or should I put this method somewhere else? I'm just starting to design the project and am thinking about a good, object-oriented way of organizing methods in classes. So I would like to start from the beginning with OOP in mind. Best practices needed!
I would either put it in the Worker class, or have a static function in a finance library. I don't think a Finance object really makes sense; I think it would be more a set of business rules than anything, so it would be static.
public class Worker {
    public Worker Superior { get; set; }

    public decimal WeightedSalary {
        get {
            return (Superior.Salary * 2) + this.Salary;
        }
    }

    public decimal Salary { get; set; }
}
or
public static class Finance {
    public static decimal WeightedSumOfWorkerSalaryAndSuperior(Worker WorkerA, Worker Superior) {
        return WorkerA.Salary + Superior.Salary * 2;
    }
}
For your design to be Object Oriented, you should start by thinking of the purpose of the entire application. If there is only one method in your application (weighted sum), then there isn't too much design to go on.
If this is a finance application, maybe you could have a Salary class which contains a worker's salary and some utility functions.
For the method you pointed out, if the Worker class has a reference to his Superior, you could make this method part of the Worker class.
Without more information on the purpose of the application, it's difficult to give good guidance.
So it may be impossible to give you a complete answer about "best practices" without knowing more about your domain, but I can tell you that you may be setting yourself up for disaster by thinking about the implementation details this early.
If you're like me then you were taught that good OOD/OOP is meticulously detailed and involves BDUF. It wasn't until later in my career that I found out this is the reason so many projects become egregiously unmaintainable later on down the road. Assumptions are made about how the project might work, instead of allowing the design to emerge naturally from how the code is actually going to be used.
Simply stated: you need to be doing BDD/TDD (Behavior-/Test-Driven Development).
Start with a rough domain model sketched out, but avoid too much detail.
Pick a functional area that you want to get started with, preferably at the top of the model or one that the user will be interacting with.
Brainstorm on expected functionality that the unit should have and make a list.
Start the TDD cycle on that unit and then refactor aggressively as you go.
What you will end up with is exactly what you do need, and nothing you don't (most of the time). You gain the added benefit of having full test coverage so you can refactor later on down the road without worrying about breaking stuff :)
I know I haven't given you any code here, but that is because anything I give you will probably be wrong, and then you will be stuck with it. Only you know how the code is actually going to be used, and you should start by writing the code in that way. TDD focuses on how the code should look, and then you can fill in the implementation details as you go.
A full explanation of this is beyond the scope of this post, but there are a myriad of resources available online as well as a number of books that are excellent resources for beginning the practice of TDD. These two guys should get you off to a good start.
Martin Fowler
Kent Beck
Following up on the answer by brien, I suggest looking at the practice of CRC cards (Class-Responsibility-Collaboration). There are many sources of information, including:
this tutorial from Cal Poly,
this orientation on the Agile Modeling web site, and
The CRC Card Book, which discusses the practice and its use with multiple languages.
Understanding which class should "own" a particular behavior (and/or which classes should collaborate in implementing a given use case), is in general a top-down kind of discussion driven by the overall design of what your system is doing for its users.
It is easy to find out whether your code needs improvement. There is a code smell in your snippet. You should address that.
It is good that you have a very declarative name for the method, but it is too long. It sounds like, if you keep that method in this Finanse class, it is inevitable that you will have to use all those words in the method name to convey what the method is intended to do.
It basically means that this method may not belong to this class.
One way to address this code smell is to see whether you could get a shorter method name if the method lived on another class. I see you have Worker and Salary classes.
Assuming those are the only classes available and you don't want to add more, I would put this on Salary. Salary knows how to calculate a weighted salary given another salary (the superior's salary in this case) as input. You don't need more than two words for the method name now.
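For instance, a sketch of what that might look like, assuming a small Salary class (WeightedWith and Amount are invented names):

public class Salary
{
    public decimal Amount { get; set; }

    // Usage: worker.Salary.WeightedWith(superior.Salary)
    public decimal WeightedWith(Salary superiorSalary)
    {
        return Amount + superiorSalary.Amount * 2;
    }
}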
@Shawn's answer is one variation on addressing this code smell. (I think you could call it the "long method name" code smell.)
