I am trying to learn the Single Responsibility Principle (SRP), but it is proving quite difficult because I struggle to figure out when and what I should remove from one class and where I should put/organize it.
I was googling around for some materials and code examples, but most of the material I found, instead of making it easier to understand, made it harder.
For example, if I have a list of Users, and from that list I have a class called Control that does lots of things, like sending a greeting and goodbye message when a user comes in/out, verifying whether the user should be able to enter or not and kicking him, receiving user commands and messages, etc.
From the example you don't need much to see that I am already doing too much in one class, but I am still not clear on how to split it up and reorganize it afterwards.
If I understand the SRP, I would have a class for joining the channel, a class for the greeting and goodbye, a class for user verification, a class for reading the commands, right?
But where and how would I use the kick, for example?
I have the verification class, so I am sure I would have all sorts of user verification in there, including whether or not a user should be kicked.
So would the kick function live inside the channel join class and be called if the verification fails?
For example:
public void UserJoin(User user)
{
if (verify.CanJoin(user))
{
messages.Greeting(user);
}
else
{
this.kick(user);
}
}
Would appreciate if you guys could lend me a hand here with easy-to-understand C# materials that are online and free, or by showing me how I would split the quoted example, and if possible some sample code, advice, etc.
Let’s start with what the Single Responsibility Principle (SRP) actually means:
A class should have only one reason to change.
This effectively means every object (class) should have a single responsibility; if a class has more than one responsibility, those responsibilities become coupled and cannot be executed independently, i.e. changes in one can affect or even break the other in a particular implementation.
A definite must-read for this is the source itself (PDF chapter from "Agile Software Development, Principles, Patterns, and Practices"): The Single Responsibility Principle
Having said that, you should design your classes so they ideally only do one thing and do one thing well.
First think about what “entities” you have. In your example I can see User and Channel, and the medium between them through which they communicate (“messages”). These entities have certain relationships with each other:
A user has a number of channels that he has joined
A channel has a number of users
This also leads naturally to the following list of functionalities:
A user can request to join a channel.
A user can send a message to a channel he has joined
A user can leave a channel
A channel can deny or allow a user’s request to join
A channel can kick a user
A channel can broadcast a message to all users in the channel
A channel can send a greeting message to individual users in the channel
SRP is an important concept but should hardly stand by itself – equally important for your design is the Dependency Inversion Principle (DIP). To incorporate that into the design remember that your particular implementations of the User, Message and Channel entities should depend on an abstraction or interface rather than a particular concrete implementation. For this reason we start with designing interfaces not concrete classes:
public interface ICredentials {}
public interface IMessage
{
//properties
string Text {get;set;}
DateTime TimeStamp { get; set; }
IChannel Channel { get; set; }
}
public interface IChannel
{
//properties
ReadOnlyCollection<IUser> Users {get;}
ReadOnlyCollection<IMessage> MessageHistory { get; }
//abilities
bool Add(IUser user);
void Remove(IUser user);
void BroadcastMessage(IMessage message);
void UnicastMessage(IMessage message);
}
public interface IUser
{
string Name {get;}
ICredentials Credentials { get; }
bool Add(IChannel channel);
void Remove(IChannel channel);
void ReceiveMessage(IMessage message);
void SendMessage(IMessage message);
}
What this list doesn’t tell us is for what reason these functionalities are executed. We are better off putting the responsibility of “why” (user management and control) in a separate entity – this way the User and Channel entities do not have to change should the “why” change. We can leverage the strategy pattern and DI here and can have any concrete implementation of IChannel depend on an IUserControl entity that gives us the "why".
public interface IUserControl
{
bool ShouldUserBeKicked(IUser user, IChannel channel);
bool MayUserJoin(IUser user, IChannel channel);
}
public class Channel : IChannel
{
private IUserControl _userControl;
public Channel(IUserControl userControl)
{
_userControl = userControl;
}
public bool Add(IUser user)
{
if (!_userControl.MayUserJoin(user, this))
return false;
//..
}
//..
}
You see that in the above design SRP is not even close to perfect, i.e. an IChannel is still dependent on the abstractions IUser and IMessage.
In the end one should strive for a flexible, loosely coupled design but there are always tradeoffs to be made and grey areas also depending on where you expect your application to change.
SRP taken to the extreme in my opinion leads to very flexible but also fragmented and complex code that might not be as readily understandable as simpler but somewhat more tightly coupled code.
In fact if two responsibilities are always expected to change at the same time you arguably should not separate them into different classes as this would lead, to quote Martin, to a "smell of Needless Complexity". The same is the case for responsibilities that never change - the behavior is invariant, and there is no need to split it.
The main idea here is that you should make a judgment call where you see responsibilities/behavior possibly change independently in the future, which behavior is co-dependent on each other and will always change at the same time ("tied at the hip") and which behavior will never change in the first place.
I had a very easy time learning this principle. It was presented to me in three small, bite-sized parts:
Do one thing
Do that thing only
Do that thing well
Code that fulfills those criteria fulfills the Single-Responsibility Principle.
In your above code,
public void UserJoin(User user)
{
if (verify.CanJoin(user))
{
messages.Greeting(user);
}
else
{
this.kick(user);
}
}
UserJoin does not fulfill the SRP; it is doing two things: greeting the user if they can join, or rejecting them if they cannot. It might be better to reorganize the method:
public void UserJoin(User user)
{
    if (verify.CanJoin(user))
    {
        GreetUser(user);
    }
    else
    {
        RejectUser(user);
    }
}
public void GreetUser(User user)
{
    messages.Greeting(user);
}
public void RejectUser(User user)
{
    messages.Reject(user);
    this.kick(user);
}
Functionally, this is no different from the code originally posted. However, this code is more maintainable: what if a new business rule came down that, because of recent cybersecurity attacks, you want to record the rejected user's IP address? You would simply modify the RejectUser method. What if you wanted to show additional messages upon user login? Just update GreetUser.
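For instance, a rough sketch of that change might look like the following (the auditLog field and the user.IpAddress property are hypothetical, only there to show that the new rule touches this one method):

public void RejectUser(User user)
{
    // Hypothetical audit log and IP address property, used only to illustrate
    // that the new business rule is confined to RejectUser.
    auditLog.RecordRejection(user.Name, user.IpAddress, DateTime.UtcNow);
    messages.Reject(user);
    this.kick(user);
}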
SRP in my experience makes for maintainable code. And maintainable code tends to go a long way toward fulfilling the other parts of SOLID.
My recommendation is to start with the basics: what things do you have? You mentioned multiple things like Message, User, Channel, etc. Beyond the simple things, you also have behaviors that belong to your things. A few examples of behaviors:
a message can be sent
a channel can accept a user (or you might say a user can join a channel)
a channel can kick a user
and so on...
Note that this is just one way to look at it. You can abstract any one of these behaviors until abstraction means nothing and everything! But, a layer of abstraction usually doesn't hurt.
From here, there are two common schools of thought in OOP: complete encapsulation and single responsibility. The former would lead you to encapsulate all related behavior within its owning object (resulting in inflexible design), whereas the latter would advise against it (resulting in loose coupling and flexibility).
I would go on but it's late and I need to get some sleep... I'm making this a community post, so someone can finish what I've started and improve what I've got so far...
Happy learning!
Related
I am writing an application in C#. Now I am thinking over and over again about its design. I have already changed my mind 3 or 4 times, but thankfully for the better.
After a few iterations I have come up with a solution, but I am still wondering what is the best way to achieve it with C#.
Basically I will have a class, let's call it MessageManager, and after each action different classes will send a message to MessageManager, and MessageManager will dispatch the message depending on the response. Then I will have another manager, call it UIManager; it will perform all the UI switching or inform the MessageManager in case any core/helper operation is required.
Now the thing is, there could be around 50-60 types of messages, each with a different set of arguments. And I want to design it in a way that it can accommodate new messages in the future as well.
What is the best way to accomplish that in C#? What would work best for such a case: delegates, events? Flexibility is the most important thing.
I believe that combining the Observer pattern (publish/subscribe logic) with the Mediator pattern can be a good solution to your problem. Your Mediator class will act as an Event Manager (most of your classes will depend on it as a mediator rather than depending on each other):
public class MessageManager{
private Dictionary<string,List<MessageListener>> listeners;
public void sendMessage(Message m){
//loop over listeners of m
}
public void addMessageListener(MessageListener ml){
//add a listener
}
public void removeMessageListener(MessageListener ml){
//remove a listener
}
}
Message would be the parent interface; having a generic abstraction at this level is very important, as it prevents the MessageManager from having to distinguish between your 50-60 types of messages and thus becoming a nightmare to maintain. The specificity of depending on a particular sub-type of Message should be moved to a lower level: the direct consumers.
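As a rough sketch of the abstractions that snippet assumes (the member names below are my own guesses, not part of the original design):

public interface Message
{
    string Topic { get; }       // key used by MessageManager to find interested listeners
}

public interface MessageListener
{
    void OnMessage(Message m);  // invoked from MessageManager.sendMessage
}

Concrete message types (the 50-60 kinds) then implement Message and carry their own strongly typed arguments, while MessageManager only ever deals with the base interface.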
I have a form and a logic class. Based on user actions, the class generates a list of actions. These actions then need to be displayed as buttons on the form, so the user can select from them.
My initial solution was this:
public class Logic {
public List<string> GetActions() {
List<string> result = new List<string>();
// ...prepare list
return result;
}
}
public class FrmGUI : Form {
Logic logic = new Logic();
private void PopulateButtons() {
foreach (string action in logic.GetActions()) {
//...create button
}
}
}
The GUI retrieves the list of strings from the Logic class and then uses that to populate a panel with buttons. Now supposedly this is bad OO practise because I'm exposing something about how Logic class behaves. There is an assumption here that the GetActions method will always exist and that the Logic class will always be able to return this list of strings.
Another solution is this:
public class Logic {
public void PopulateButtons(Panel panel, Action<object, EventArgs> eventHandler) {
// ...prepare list
// ...populate buttons
}
}
public class FrmGUI : Form {
Logic logic = new Logic();
private void PopulateButtons() {
logic.PopulateButtons(this.panel1, actionButtonClickHandler);
}
}
Now here the GUI class knows nothing about the logic class and only expects to get the buttons populated. On the other hand, the logic class is now involved in GUI stuff.
What is the correct way to handle such cases? Or is there a third implementation which is better?
I'd use the former pattern: the Logic layer creates information, and the UI layer uses that information to create the UI.
That way, if you decide to re-skin the UI to use a drop-down list of items you only have to change the UI layer, not the logic.
It means that the UI layer has a minimal dependency on the types/data provided by the logic layer (as long as it doesn't know anything about how the logic is implemented, that is fine), but the logic layer has no idea whatsoever about what the UI implementation is - which is what you want (the lower level components in a system should not know anything about the higher level design, while the higher level components must necessarily have a basic understanding of the low-level components that they utilise).
It would be preferable that the application (or some other external entity) creates both the Logic and UI and links them together, rather than the UI itself creating the Logic - this will help the UI and logic to be much more loosely coupled.
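A minimal sketch of that composition, assuming FrmGUI is given a constructor that accepts the logic instance (illustrative only):

using System;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        // The application wires the two layers together; the form no longer news-up Logic itself.
        Logic logic = new Logic();
        FrmGUI form = new FrmGUI(logic);   // hypothetical constructor overload
        Application.Run(form);
    }
}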
I would recommend placing a layer of abstraction between your Logic and your FrmGUI.
For a simplistic example, let's say you have a login in your application. Define an interface for your logical screen. Note there is no mention here of what controls are used. The Logic class never knows the UI class/form used.
interface ILoginScreen : IScreen
{
event EventHandler LoginInvoked;
event EventHandler CancelInvoked;
string User { get; set; }
string Password { get; set; }
}
In your LoginLogic class you have code like this:
void Start() // initial LoginLogic method
{
ILoginScreen loginScreen = uiFactory.CreateLoginScreen();
loginScreen.User = string.empty;
loginScreen.Password = string.empty;
loginScreen.LoginInvoked += new EventHandler(LoginScreen_LoginInvoked);
loginScreen.CancelInvoked += new EventHandler(LoginScreen_CancelInvoked);
loginScreen.Show();
}
void LoginScreen_LoginInvoked(object sender, EventArgs e)
{
    ILoginScreen loginScreen = (ILoginScreen)sender;   // the screen that raised the event
    if (ValidateCredentials(loginScreen.User, loginScreen.Password))
    {
        // go to the next screen logic controller
    }
}
In your form, you implement ILoginScreen and refresh the UI fields with data from the User and Password properties. Additionally, you raise the required Login and Cancel events based on the user feedback (button click, Escape keystroke, whatever).
While this is a simplistic example, I do a lot of Windows Mobile and Windows CE apps where it is very common to want to run the same application on vastly different form factors and OS variants, and this approach lets you literally snap on new GUI form factors. The heart of that usage is the UIFactory that is dynamically loaded to provide the appropriate UI implementation.
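A rough sketch of what such a factory could look like (the member list and the WinForms implementation are my assumptions):

public interface IUIFactory
{
    ILoginScreen CreateLoginScreen();
    // ...one factory method per logical screen, e.g. CreateDataEntryScreen()
}

// A platform-specific implementation lives in the UI assembly and is loaded at startup,
// so the logic layer never references a concrete form class.
public class WinFormsUIFactory : IUIFactory
{
    public ILoginScreen CreateLoginScreen()
    {
        return new LoginForm();   // hypothetical: LoginForm : Form, ILoginScreen
    }
}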
That Logic can report the actions it supports (1st pattern) looks fine to me (but the return type of GetActions really should be IEnumerable<string> instead of a list).
Not so good is that in your sample the form instantiates the Logic class directly. Typically, you'd create an interface or abstract base class for the different types of Logic classes that you might have, and have concrete implementations fill in the functionality. The form would then get the logic to use through some inversion-of-control mechanism.
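Putting those two points together, a sketch could look like this (ILogic and the constructor injection are my illustration of the idea, not code from the question):

using System.Collections.Generic;
using System.Windows.Forms;

public interface ILogic
{
    IEnumerable<string> GetActions();   // callers only need to enumerate, not own the list
}

public class Logic : ILogic
{
    public IEnumerable<string> GetActions()
    {
        // ...prepare and yield actions; "Save"/"Print" are placeholder examples
        yield return "Save";
        yield return "Print";
    }
}

public class FrmGUI : Form
{
    private readonly ILogic logic;

    // The concrete Logic is supplied from outside (composition root or IoC container).
    public FrmGUI(ILogic logic)
    {
        this.logic = logic;
    }
}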
Correct????? Over the years lots of people have invested lots of time in trying to standardise this approach, and I'm afraid the answer may be deduced from the number of UI design patterns out there!
You may want to look at MVC, MVP, MVVM patterns, all of which are in vogue at the moment.
In general:
it is a good idea to try to split logic from presentation, so you're on the right lines. But remember that one of the consequences of this split is that it is better for your "logic" not to know anything about presentation (since you already have a class responsible for that).
So you might want to think about the concept of "buttons", and think (from your logic point of view), "don't I really mean commands?". They only really become buttons when you think of them in the context of a screen. But take, say, a command to load the transactions on a particular bank account: you don't need a screen to conceptualise how this would work.
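For example, the logic layer could hand back small command objects rather than raw strings; whether each one becomes a button, a menu item or a link is then purely a presentation decision (the UICommand type below is my own illustration):

using System;
using System.Collections.Generic;

public class UICommand
{
    public string Caption { get; private set; }
    public Action Execute { get; private set; }

    public UICommand(string caption, Action execute)
    {
        Caption = caption;
        Execute = execute;
    }
}

public class AccountLogic
{
    // The logic layer exposes what can be done, with no idea of how it will be rendered.
    public IEnumerable<UICommand> GetCommands()
    {
        yield return new UICommand("Load transactions", LoadTransactions);
    }

    private void LoadTransactions() { /* ...query the account... */ }
}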
A good thing I find is to imagine that you're going to develop this app with both a forms front end and, say, a web front end which does exactly the same thing. Obviously these two apps would have a totally different presentation layer because of the fundamentally different technologies involved.
But because you don't want to write code twice you'll have a "logic" layer too, where you'll stuff as much common code as you can. For example, deciding whether a bank account is overdrawn - doesn't matter whether you're web or win, overdrawn is still overdrawn. And conversely, any place where you'd end up writing different code between web and win belongs into your "presentation" layer. For example, displaying an overdrawn balance in red.
Food for thought.
The first one is better, because your interface between GUI and logic is just a list of strings.
After that, it all depends on the way you're calling actions on your logic class from your button.
If you have a generic method taking the action string, it's fine. If you need to call different methods on your logic class depending on the action string, you'll need a mapping in the GUI class between action strings and method calls. You could also import this "action string - method" mapping from your logic class to keep things separated.
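A sketch of that mapping inside the GUI class, assuming the logic class exposes one method per action (Save and Print here are hypothetical):

// Members inside FrmGUI
private Dictionary<string, Action> actionMap;

private void BuildActionMap()
{
    actionMap = new Dictionary<string, Action>
    {
        { "Save",  () => logic.Save()  },
        { "Print", () => logic.Print() }
    };
}

private void ActionButton_Click(object sender, EventArgs e)
{
    Button button = (Button)sender;
    actionMap[button.Text]();   // dispatch purely on the action string
}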
My opinion is, it depends on the reason for creating something like a logic tier and a GUI tier. I think the most common reason is to reuse the logic, e.g. to use it for both a WPF and a web GUI, or because the data has to be processed before sending it to the GUI. Your first example fits the mentioned pattern. In your second example the logic seems not to be reusable, because it is GUI-specific.
However, in the real world there is no right or wrong answer. The architecture should fit your needs and make your project maintainable (e.g. by reducing redundant code).
In your case the question is: How often do you need these functions and where/when do you need them?
Recently we had a discussion regarding Data and Behavior separation in classes. The concept of separation of Data and Behaviour is implemented by placing the Domain Model and its behavior into separate classes.
However I am not convinced of the supposed benefits of this approach, even though it might have been coined by a "great" (I think it was Martin Fowler, though I am not sure). I present a simple example here. Suppose I have a Person class containing data for a Person and its methods (behavior).
class Person
{
string Name;
DateTime BirthDate;
//constructor
Person(string Name, DateTime BirthDate)
{
this.Name = Name;
this.BirthDate = BirthDate;
}
int GetAge()
{
return Today - BirthDate; //for illustration only
}
}
Now, separate out the behavior and data into separate classes.
class Person
{
string Name;
DateTime BirthDate;
//constructor
Person(string Name, DateTime BirthDate)
{
this.Name = Name;
this.BirthDate = BirthDate;
}
}
class PersonService
{
Person personObject;
//constructor
PersonService(string Name, DateTime BirthDate)
{
this.personObject = new Person(Name, BirthDate);
}
//overloaded constructor
PersonService(Person personObject)
{
this.personObject = personObject;
}
int GetAge()
{
return personObject.Today - personObject.BirthDate; //for illustration only
}
}
This is supposed to be beneficial, improving flexibility and providing loose coupling. I do not see how. To me this introduces extra coding and a performance penalty, since each time we have to initialize two class objects. And I see more problems in extending this code. Consider what happens when we introduce inheritance in the above case: we have to inherit both classes.
class Employee: Person
{
Double Salary;
Employee(string Name, DateTime BirthDate, Double Salary): base(Name, BirthDate)
{
this.Salary = Salary;
}
}
class EmployeeService: PersonService
{
Employee employeeObject;
//constructor
EmployeeService(string Name, DateTime BirthDate, Double Salary) : base(Name, BirthDate)
{
this.employeeObject = new Employee(Name, BirthDate, Salary);
}
//overloaded constructor
EmployeeService(Employee employeeObject) : base(employeeObject)
{
this.employeeObject = employeeObject;
}
}
Note that even if we segregate the behavior into a separate class, we still need an object of the data class for the behaviour class methods to work on. So in the end our behavior class contains both the data and the behavior, albeit with the data in the form of a model object.
You might say that you can add some interfaces to the mix, so we could have an IPersonService and an IEmployeeService. But I think introducing interfaces for each and every class and inheriting from interfaces does not seem OK.
So then, can you tell me what I have achieved by separating out the data and behavior in the above case that I could not have achieved by having them in the same class?
I agree, the separation as you implemented it is cumbersome. But there are other options. What about an ageCalculator object that has a method getAge(person p)? Or person.getAge(IAgeCalculator calc)? Or better yet, calc.getAge(IAgeable a)?
There are several benefits that accrue from separating these concerns. Assuming that you intended for your implementation to return years, what if a person / baby is only 3 months old? Do you return 0? .25? Throw an exception? What if I want the age of a dog? Age in decades or hours? What if I want the age as of a certain date? What if the person is dead? What if I want to use a Martian orbit for the year? Or the Hebrew calendar?
None of that should affect classes that consume the person interface but make no use of birthdate or age. By decoupling the age calculation from the data it consumes, you get increased flexibility and increased chance of reuse. (Maybe even calculate age of cheese and person with same code!)
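A rough sketch of that last variant, with names of my own choosing:

using System;

public interface IAgeable
{
    DateTime BirthDate { get; }
}

public class AgeCalculator
{
    // The calculation policy (calendar, rounding, reference date) lives here, away from
    // Person, Dog, Cheese or anything else that merely exposes a birth date.
    public int GetAgeInYears(IAgeable subject, DateTime asOf)
    {
        int age = asOf.Year - subject.BirthDate.Year;
        if (subject.BirthDate.Date > asOf.Date.AddYears(-age))
            age--;   // birthday not reached yet in the asOf year
        return age;
    }
}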
As usual, the optimal design will vary greatly with context. It would be a rare situation, however, where performance would influence my decision in this type of problem. Other parts of the system are likely greater factors by several orders of magnitude, like the speed of light between browser and server, database retrieval, or serialization. Time/dollars are better spent refactoring toward simplicity and maintainability than on theoretical performance concerns. To that end, I find separating data and behavior of domain models to be helpful. They are, after all, separate concerns, no?
Even with such priorities, things are muddled. Now the class that wants the person's age has another dependency, the calc class. Ideally, fewer class dependencies are desirable. Also, who is responsible for instantiating calc? Do we inject it? Create a calcFactory? Or should it be a static method? How does the decision affect testability? Has the drive toward simplicity actually increased complexity?
There seems to be a disconnect between OO's insistence on combining behavior with data and the single responsibility principle. When all else fails, write it both ways and then ask a coworker, "which one is simpler?"
Actually, Martin Fowler says that in the domain model, data and behavior should be combined. Take a look at AnemicDomainModel.
I realize I am about a year late on replying to this but anyway... lol
I have separated the Behaviors out before but not in the way you have shown.
It is when you have Behaviors that should have a common interface yet allow for different (unique) implementation for different objects that separating out the behaviors makes sense.
If I was making a game, for example, some behaviors available for objects might be the ability to walk, fly, jump and so forth.
By defining interfaces such as IWalkableBehavior, IFlyableBehavior and IJumpableBehavior and then making concrete classes based on these interfaces, you get great flexibility and code reuse.
For IWalkableBehavior you might have...
CannotWalk : IWalkableBehavior
LimitedWalking : IWalkableBehavior
UnlimitedWalking : IWalkableBehavior
Similar pattern for IFlyableBehavior and IJumpableBehavior.
These concrete classes would implement the behavior for CannotWalk, LimitedWalking and UnlimitedWalking.
In your concrete classes for the objects (such as an enemy) you would have a local instance of these Behaviors. For example:
IWalkableBehavior _walking = new CannotWalk();
Others might use new LimitedWalking() or new UnlimitedWalking();
When the time comes to handle the behavior of an enemy, say the AI finds the player is within a certain range of the enemy (and this could be a behavior as well, say IReactsToPlayerProximity), it may then naturally attempt to move the enemy closer to "engage" the player.
All that is needed is for the _walking.Walk(int xdist) method to be called and it will automagically be sorted out. If the object is using CannotWalk then nothing will happen because the Walk() method would be defined as simply returning and doing nothing. If using LimitedWalking the enemy may move a very short distance toward the player and if UnlimitedWalking the enemy may move right up to the player.
I might not be explaining this very clearly but basically what I mean is to look at it the opposite way. Instead of encapsulating your object (what you are calling Data here) into the Behavior class encapsulate the Behavior into the object using Interfaces and this gives you the "loose coupling" allowing you to refine the behaviors as well as easily extend each "behavioral base" (Walking, Flying, Jumping, etc) with new implementations yet your objects themselves know no difference. They just have a Walking behavior even if that behavior is defined as CannotWalk.
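A condensed sketch of that idea (the int return value is my own device to keep the example compilable; the answer's description leaves the mechanics open):

using System;

public interface IWalkableBehavior
{
    int Walk(int xdist);   // returns how far the owner actually moves
}

public class CannotWalk : IWalkableBehavior
{
    public int Walk(int xdist) { return 0; }                  // does nothing, as described
}

public class LimitedWalking : IWalkableBehavior
{
    public int Walk(int xdist) { return Math.Min(xdist, 2); } // only a short step
}

public class UnlimitedWalking : IWalkableBehavior
{
    public int Walk(int xdist) { return xdist; }              // moves the full distance
}

public class Enemy
{
    // Swap the behavior instance and the enemy reacts differently, with no change to Enemy itself.
    private IWalkableBehavior _walking = new CannotWalk();

    public int X { get; private set; }

    public void MoveToward(int xdist)
    {
        X += _walking.Walk(xdist);
    }
}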
Funnily enough, OOP is often described as combining data and behavior.
What you're showing here is something I consider an anti-pattern: the "anemic domain model." It does suffer from all the problems you've mentioned, and should be avoided.
Different levels of an application might have a more procedural bent, which lends themselves to a service model like you've shown, but that would usually only be at the very edge of a system. And even so, that would internally be implemented by traditional object design (data + behavior). Usually, this is just a headache.
Age is intrinsic to a person (any person). Therefore it should be a part of the Person object.
hasExperienceWithThe40mmRocketLauncher() is not intrinsic to a person, but perhaps to the interface MilitaryService that can either extend or aggregate the Person object. Therefore it should not be a part of the Person object.
In general, the goal is to avoid adding methods to the base object ("Person") just because it's the easiest way out, as you introduce exceptions to normal Person behavior.
Basically, if you see yourself adding stuff like "hasServedInMilitary" to your base object, you are in trouble. Next you will be doing loads of statements such as if (p.hasServedInMilitary()) blablabla. This is really logically the same as doing instanceOf() checks all the time, and indicates that Person and "Person who has seen military service" are really two different things, and should be disconnected somehow.
Taking a step back, OOP is about reducing the number of if and switch statements and instead letting the various objects handle things as per their specific implementations of abstract methods/interfaces. Separating the Data and Behavior promotes this, but there's no reason to take it to extremes and separate all data from all behavior.
The approach you have described is consistent with the strategy pattern. It facilitates the following design principles:
The open/closed principle
Classes should be open for extension but closed for modification
Composition over Inheritance
Behaviours are defined as separate interfaces and specific classes that implement these interfaces. This allows better decoupling between the behaviour and the class that uses the behaviour. The behaviour can be changed without breaking the classes that use it, and the classes can switch between behaviours by changing the specific implementation used without requiring any significant code changes.
The answer is really that it's good in the right situation. As a developer part of your job is to determine the best solution for the problems presented and try to position the solution to be able to accommodate future needs.
I don't often follow this pattern, but if the compiler or environment is designed specifically to support the separation of data and behavior, there are many optimizations that can be achieved in how the platform handles and organizes your scripts.
It’s in your best interest to familiarize yourself with as many design patterns as possible rather than custom-building your entire solution every time, and don’t be too judgmental just because a pattern doesn’t immediately make sense. You can often use existing design patterns to achieve flexible and robust solutions throughout your code. Just remember they are all meant as a starting point, so you should always be prepared to customize them to accommodate the individual scenarios you encounter.
One of the questions I was asked was that I have a database table with following columns
pid - unique identifier
orderid - varchar(20)
documentid - int
documentpath - varchar(250)
currentLocation - varchar(250)
newlocation - varchar(250)
status - varchar(15)
I have to write a C# app to move the files from currentlocation to newlocation and update the status column as either 'SUCCESS' or 'FAILURE'.
This was my answer
Create a list of all the records using LINQ
Create a command object which would perform the file moves
Using foreach, invoke a delegate to move the files
Use EndInvoke to capture any exception and update the db accordingly
I was told that the Command pattern and delegates did not fit the bill here - I was asked to think and implement a more favorable GoF pattern.
Not sure what they were looking for - in this day and age, do candidates keep a lot of info in their head, when one always has Google to find an answer and come up with a solution?
I sort of agree with Aaronaught's comment above. For a problem like this, sometimes you can overthink it and try to do something more than you actually need to do.
That said, the one GoF pattern that came to mind was "Iterator." In your first statement, you said you would read all the records into a List. The one thing that could be problematic with that is if you had millions of these records. You'd probably want to process them in a more successive fashion, rather than reading the entire list into memory. The Iterator pattern would give you the ability to iterate over the list without having to know the underlying (database) storage/retrieval mechanism. The underlying implementation of the iterator could retrieve one, ten, or a hundred records at a time, and dole them out to the business logic upon request. This would provide some testing benefit as well, because you could test your other "business" logic using a different type of underlying storage (e.g. in-memory list), so that your unit tests would be independent from the database.
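In C# the natural expression of that is IEnumerable&lt;T&gt; with yield return, so the consumer iterates without knowing whether records are already in memory or paged from the database (RecordRepository and FileRecord below are names I made up for the sketch):

using System.Collections.Generic;

public class FileRecord
{
    public string CurrentLocation { get; set; }
    public string NewLocation { get; set; }
    public string Status { get; set; }
}

public class RecordRepository
{
    // Streams records a page at a time instead of materialising millions into one List.
    public IEnumerable<FileRecord> GetPendingRecords()
    {
        foreach (List<FileRecord> page in LoadPages())
            foreach (FileRecord record in page)
                yield return record;
    }

    private IEnumerable<List<FileRecord>> LoadPages()
    {
        // ...fetch e.g. 100 rows per round trip; an in-memory fake can replace this in unit tests
        yield break;
    }
}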
A deep understanding of patterns is something you should definitely have as a developer - you shouldn't need to go to Google to determine which pattern to "use" because you won't have enough time to really understand that pattern between when you start reading about it and when you apply it.
Patterns are mostly about understanding forces and encapsulating variation. That is, forces create certain kinds of variation and we have well understood ways of encapsulating those kinds of variation. A "pattern" is a body of understanding about which forces lead to which kinds of variation and which methods of encapsulation best address those.
I have a friend who was teaching a course on patterns and it suddenly struck him that he could solve a given problem "using" (meaning "implementing the encapsulating technique of") every pattern in his course book. It really did a great job of helping drive home the fact that finding the right technique is more important than knowing how to apply a technique.
The Command pattern, for instance, starts with an understanding that sometimes we want to vary when something happens. In these cases, we want to decouple the decision of what to do from the decision of when to do it. In this example, I don't see any indication that when your command should be executed varies at all.
In fact, I don't really see anything that varies so there might not have been any patterns in the problem at all. If your interviewers were saying there were, then they may have some learning to do as well.
Anywho... I'd recommend Design Patterns Explained by Shalloway and Trott. You'll get a deeper understanding of what patterns are really for and how they help you do your job and, the next time they tell you that you are "using" the wrong pattern, you might just be in a position to educate them. That seems to go over pretty well for me... about 20% of the time. :)
I would rather say that the interviewer wanted you to use (or mention) the SOLID object oriented design principles here, and in that process you might use some design pattern.
For instance, we could make a design like the one below which adheres to SRP, OCP, and DIP.
internal interface IStatusRecordsToMove
{
List<IRecord> Records { get; }
}
internal interface IRecord
{
string Status { get; set; }
}
internal interface IRecordsMover
{
ITargetDb TargetDb { get; }
void Move(IStatusRecordsToMove record);
}
internal interface ITargetDb
{
void SaveAndUpdateStatus(IRecord record);
}
class ProcessTableRecordsToMove : IStatusRecordsToMove
{
public List<IRecord> Records
{
get { throw new NotImplementedException(); }
}
}
internal class ProcessRecordsMoverImpl : IRecordsMover
{
#region IRecordsMover Members
public ITargetDb TargetDb
{
get { throw new NotImplementedException(); }
}
public void Move(IStatusRecordsToMove recordsToMove)
{
foreach (IRecord item in recordsToMove.Records)
{
TargetDb.SaveAndUpdateStatus(item);
}
}
#endregion
}
internal class TargetTableBDb : ITargetDb
{
public void SaveAndUpdateStatus(IRecord record)
{
try
{
//some db object, save new record
record.Status = "Success";
}
catch(ApplicationException)
{
record.Status = "Failed";
}
finally
{
//Update IRecord Status in Db
}
}
}
When writing GUI apps I use a top level class that "controls" or "coordinates" the application. The top level class would be responsible for coordinating things like initialising network connections, handling application wide UI actions, loading configuration files etc.
At certain stages in the GUI app control is handed off to a different class, for example the main control swaps from the login screen to the data entry screen once the user authenticates. The different classes need to use functionality of objects owned by the top level control. In the past I would simply pass the objects to the subordinate controls or create an interface. Lately I have changed to passing method delegates instead of whole objects with the two main reasons being:
It's a lot easier to mock a method than a class when unit testing,
It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using.
Some simplified example code is below:
delegate bool LoginDelegate(string username, string password);
delegate void UpdateDataDelegate(BizData data);
delegate void PrintDataDelegate(BizData data);
class MainScreen {
private MyNetwork m_network;
private MyPrinter m_printer;
private LoginScreen m_loginScreen;
private DataEntryScreen m_dataEntryScreen;
public MainScreen() {
m_network = new MyNetwork();
m_printer = new MyPrinter();
m_loginScreen = new LoginScreen(m_network.Login);
m_dataEntryScreen = new DataEntryScreen(m_network.Update, m_printer.Print);
}
}
class LoginScreen {
LoginDelegate Login_External;
public LoginScreen(LoginDelegate login) {
Login_External = login;
}
}
class DataEntryScreen {
UpdateDataDelegate UpdateData_External;
PrintDataDelegate PrintData_External;
public DataEntryScreen(UpdateDataDelegate updateData, PrintDataDelegate printData) {
UpdateData_External = updateData;
PrintData_External = printData;
}
}
My question is: while I prefer this approach and it makes good sense to me, how is the next developer who comes along going to find it? In sample and open source C# code, interfaces are the preferred approach for decoupling, whereas this approach of using delegates leans more towards functional programming. Am I likely to get the subsequent developers swearing under their breath at what is, to them, a counter-intuitive approach?
It's an interesting approach. You may want to pay attention to two things:
Like Philip mentioned, when you have a lot of methods to define, you will end up with a big constructor. This will cause deep coupling between classes. One more or one less delegate will require everyone to modify the signature. You should consider making them public properties and using some DI framework.
Breaking down the implementation to the method level can be too granular sometimes. With class/interface, you can group methods by the domain/functionality. If you replace them with delegates, they can be mixed up and become difficult to read/maintain.
It seems the number of delegates is an important factor here.
While I can certainly see the positive side of using delegates rather than an interface, I have to disagree with both of your bullet points:
"It's a lot easier to mock a method than a class when unit testing". Most mock frameworks for c# are built around the idea of mocking a type. While many can mock methods, the samples and documentation (and focus) are normally around types. Mocking an interface with one method is just as easy or easier to mock than a method.
"It makes the code more readable by documenting in the class constructor exactly which methods subordinate classes are using." Also has it's cons - once a class needs multiple methods, the constructors get large; and once a subordinate class needs a new property or method, rather than just modifying the interface you must also add it to allthe class constructors up the chain.
I'm not saying this is a bad approach by any means - passing functions rather than types does clearly state what you are doing and can reduce your object model complexity. However, in c# your next developer will probably see this as odd or confusing (depending on skill level). Mixing bits of OO and Functional approaches will probably get a raised eyebrow at the very least from most developers you will work with.
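For comparison, a sketch of the interface-based version of the same wiring (names chosen to mirror the question's sample, with BizData taken from it):

public interface IDataEntryServices
{
    void Update(BizData data);
    void Print(BizData data);
}

class DataEntryScreen
{
    private readonly IDataEntryServices _services;

    // One dependency grouping the related operations instead of one delegate per method;
    // a mocking framework can stub IDataEntryServices in a single setup call.
    public DataEntryScreen(IDataEntryServices services)
    {
        _services = services;
    }
}

Which reads better is largely a matter of team convention; the delegate version states the exact methods used, while the interface version keeps constructors short as the dependency grows.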