This pattern pops up a lot. It looks like a very verbose way to move what would otherwise be separate named methods into a single method, distinguished by a parameter.
Is there any good reason to have this pattern over just having two methods Method1() and Method2()? The real kicker is that this pattern tends to be invoked only with constants at runtime - i.e. the arguments are all known at compile time.
public enum Commands
{
    Method1,
    Method2
}

public void ClientCode()
{
    // Always invoked with constants! Never user input.
    RunCommands(Commands.Method1);
    RunCommands(Commands.Method2);
}

public void RunCommands(Commands currentCommand)
{
    switch (currentCommand)
    {
        case Commands.Method1:
            // Stuff happens
            break;
        case Commands.Method2:
            // Other stuff happens
            break;
        default:
            throw new ArgumentOutOfRangeException("currentCommand");
    }
}
To an OO programmer, this looks horrible.
The switch and enum would need synchronised maintenance and the default case seems like make-work.
The OO programmer would substitute an object with named methods: then a name like Method1 would appear only once in the library, and all the default cases would be obviated.
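A minimal sketch of that substitution (the class name is invented for illustration; the method names echo the question):

public class CommandRunner
{
    public void Method1()
    {
        // Stuff happens
    }

    public void Method2()
    {
        // Other stuff happens
    }
}

public void ClientCode()
{
    var runner = new CommandRunner();
    runner.Method1();
    runner.Method2();
}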
Yes, your clients still need to be synchronised with the methods you supply - a static language always insists on method names being known at compile time.
You could argue that this pattern allows you to put shared logging (or other) code for method entry and exit in a single place. But I wouldn't. AOP is a better approach for this sort of thing.
That pattern could be valid if you needed the coupling to be very loose. For example you might have an interface
interface CommandProcessor
{
    void process(Command c);
}
If you have a method per command, then each time you add a new command you would need to add a new method; if you have multiple implementations, you would need to add the method to each processor. This could be resolved by having some base class, but if the needs diverge you could end up with a very deep class hierarchy as you add new abstraction layers (or the processor may already be extending another class). If it is based on switches over the constant, you can have a default case that handles new cases appropriately (exceptions, or whatever may be appropriate).
I have used a pattern similar to this in my code with the addition of a factory. The operations started as a small set, but I knew they would be increasing, so I had a mechanism to describe the command and then a factory that produced CommandProcessors. The factory would generate the appropriate processor and then the single method of that processor would accept the command and perform its processing.
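A rough sketch of that mechanism, under assumed names (Command, CommandKind and the processor classes are all invented for illustration, not the actual code):

using System;

public enum CommandKind { Export, Import }

public class Command
{
    public CommandKind Kind { get; set; }
    public string Payload { get; set; }
}

public interface ICommandProcessor
{
    void Process(Command command);
}

public class ExportProcessor : ICommandProcessor
{
    public void Process(Command command)
    {
        Console.WriteLine($"Exporting: {command.Payload}");
    }
}

public class ImportProcessor : ICommandProcessor
{
    public void Process(Command command)
    {
        Console.WriteLine($"Importing: {command.Payload}");
    }
}

public class CommandProcessorFactory
{
    // The factory maps a command description to a suitable processor;
    // the caller only ever sees the single Process method.
    public ICommandProcessor Create(Command command)
    {
        switch (command.Kind)
        {
            case CommandKind.Export: return new ExportProcessor();
            case CommandKind.Import: return new ImportProcessor();
            default: throw new ArgumentOutOfRangeException(nameof(command));
        }
    }
}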
That said, if your list of commands is fairly static and you don't need to worry about how tightly things are coupled, then the one-method-per-command approach certainly lends itself to much more readable code.
I can't see any obvious advantages. Quite the opposite; by splitting the blocks into separate methods, each method will be smaller, easier to read and easier to test.
If needed, you could still have the same "entry point" method, where each case would just branch out and call another method. Whether that would be a good or bad idea is impossible to say without knowing more about specific cases. Either way, I would definitely avoid implementing the code for each case in the RunCommands method.
If RunCommands is only ever invoked with the named constants, then I don't see any advantage in this pattern at all.
The only advantage I see (and it could be a big one) would be that the decision between Method1 and Method2 and the code that actually executes the choice could be entirely unrelated. Of course that advantage is lost when only constants are ever used to invoke RunCommands.
If the code being run inside each case block is completely separate, there is no value added. However, if there is any common code to be executed before or after the parameter-specific code, this allows it to not be repeated.
Still not really the best pattern, though. Each separate method could just call helper methods to handle the common code. And if there needs to be another call, but this one doesn't need the common code before or after, the whole model is broken (or you surround that code with an if). At that point, all value is lost.
So, really, the answer is no.
I am a student and I am currently preparing for my OOP Basics Exam.
When the controller has some methods which return a value and others that are void - how do you invoke them without using an if-else statement?
In my code, "status" is the only one which should return a string to be printed on the Console - the others are void. So I put an if-else and two methods in the CommandHandler.
Since I know "if-else" is a code smell, is there a higher-quality approach to deal with the situation?
if (commandName == "status")
{
    this.Writer.WriteLine(this.CommandHandler.ExecuteStatusCommand(commandName));
}
else
{
    this.CommandHandler.ExecuteCommand(commandName, commandParameters);
}
This is the project.
Thank you very much.
First, don't worry about if/else. If anybody tells you if/else is a code smell, put it through the Translator: What comes out is he's telling you he's too crazy, clueless, and/or fanatical to be taken seriously.
If by ill chance you get an instructor who requires you to say the Earth is flat to get an A, sure, tell him the Earth is flat. But if you're planning on a career or even a hobby as a navigator, don't ever forget that it's actually round.
So. It sounds to me like CommandHandler.ExecuteStatusCommand() executes the named command, which is implemented as a method somewhere. If the command method is void, ExecuteStatusCommand() returns null. Otherwise, the command method may return a string, in which case you want to write it to what looks like a stream.
OK, so one approach here is to say "A command is implemented via a method that takes a parameter and returns either null or a string representing a status. If it returns anything but null, write that to the stream".
This is standard stuff: you're defining a "contract". It's not at all inappropriate for command methods which actually return nothing to have a string return type, because they're fulfilling the terms of the contract. "Return a string" is an option that's open to all commands; some take advantage of it, some don't.
This allows knowledge of the command's internals to be limited to the command method itself, which is a huge advantage. You don't need to worry about special cases at the point where you call the methods. The code below doesn't need to know which commands return a status and which don't. The commands themselves are given a means to communicate that information back to the caller, so only they need to know. It's incredibly beneficial to have a design which allows different parts of your code not to care about the details of other parts. Clean "interfaces" like this make that possible. The calling code gets simpler and stays simpler. Less code, with less need to change it over time, means less effort and fewer bugs.
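As a hypothetical illustration of that contract (these command methods are invented, not taken from your project - they would live inside the CommandHandler):

// Every command method fulfils the same contract: take the parameters and
// return a string to be printed, or null if there is nothing to print.
private string StatusCommand(string[] parameters)
{
    return "Everything is running normally.";
}

private string RestartCommand(string[] parameters)
{
    // does its work, has nothing to report
    return null;
}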
As you noted, if you've got a "status" command that prints a result, and then later on you add a "print" command that also prints a result, you've got to not only implement the print command itself, but you've also got to remember to return to this part of your code and add a special case branch to the if/else.
That kind of tedious error-prone PITA is exactly the kind of nonsense OOP is meant to eliminate. If a new feature can be added without making a single edit to existing code, that's a sort of Platonic ideal of OOP.
So if ExecuteCommand() returns void, we'll want to be calling ExecuteStatusCommand() instead. I'm guessing at some things here. It would have been helpful if you had sketched out the semantics of those two methods.
var result = this.CommandHandler.ExecuteCommand(commandName, commandParameters);
if (result != null)
{
    this.Writer.WriteLine(result);
}
If my assumptions about your design are accurate, that's the whole deal. commandParameters, like the status result, are an optional part of the contract. There's nothing inherently wrong with if/else, but sometimes you don't need one.
As the question shows,
As we are using string functions like IsNullOrEmpty or IsNullOrWhiteSpace - and as the names of these functions show, they are doing more than one job - is it not a violation of SRP?
Should it not rather be string.isValid(Enum typeofValidation), which uses the strategy pattern to choose the correct strategy to validate?
Or is it perfectly OK to violate SRP in utility or static classes?
The SRP says that a function or class should have only one reason to change. What is a reason to change? A reason to change is a user who requests changes. So a class or function should have only one user who requests changes.
Now, a function that does some calculations and then some formatting has two different users who could request a change. One would request changes to the calculations and the other would request changes to the formatting. Since these users have different needs and will make their requests at different times, we'd like them to be served by different functions.
IsNullOrEmpty(String) is not likely to be serving two different users. The user who cares about null is likely the same user who cares about empty, so isNullOrEmpty does not violate the SRP.
In object-oriented programming, the single responsibility principle states that every object should have a single responsibility
You're describing methods (IsNullOrEmpty and IsNullOrWhiteSpace) which are also self-describing in what they do; they're not objects. string has a single responsibility - to be responsible for text strings!
Static helpers can perform many tasks if you choose: the whole point of the Single Responsibility principle is to ultimately make your code more maintainable and readable for future teams and yourself. As a comment says, don't overthink it. You're not designing the framework here but just consuming some parts of it that will clean your strings for you, and validate incoming data.
The SRP applies to classes, not methods. Still, it's a good idea to have methods that do one thing only. But you can't take that to extremes. For example, a console application would be fairly useless if its Main method could contain only one statement (and, if the statement is a method call, that method could also contain only one statement, etc., recursively).
Think about the implementation of IsNullOrEmpty:
static bool IsNullOrEmpty(string s)
{
    return ReferenceEquals(s, null) || Equals(s, string.Empty);
}
So, yes, it's doing two things, but they're done in a single expression. If you go to the level of expressions, any boolean expression involving binary boolean operators could be said to be "doing more than one thing" because it is evaluating the truth of more than one condition.
If the names of the methods bother you because they imply too much activity for a single method, wrap them in your own methods with names that imply the evaluation of a single condition. For example:
static bool HasNoVisibleCharacters(string s) { return string.IsNullOrWhiteSpace(s); }
static bool HasNoCharacters(string s) { return string.IsNullOrEmpty(s); }
In response to your comment:
say I wrote the function like SerilizeAndValidate(ObjectToSerilizeAndValidate) , clearly this method / class , is doing 2 things , Serialize and Validation, clearly a violation , some time methods in a class leads to maintenance nightmare like above example of serialize and validation
Yes, you are right to be concerned about this, but again, you cannot literally have methods that do one thing only. Remember that different methods will deal with different levels of abstraction. You might have a very high-level method that calls SerializeAndValidate as part of a long sequence of actions. At that level of abstraction, it might be very reasonable to think of SerializeAndValidate as a single action.
Imagine writing a set of step-by-step instructions for an experienced user to open a file's "properties" dialogue:
Right-click the file
Choose "Properties"
Now imagine writing the same instructions for someone who's never used a mouse before:
Position the mouse pointer over the file's icon
Press and release the right mouse button
A menu appears. Position the mouse pointer over the word "Properties"
Press and release the left mouse button
When we write computer programs, we need to operate at both levels of abstraction. Or, rather, at any given time, we're operating at one level of abstraction or another, so as not to confuse ourselves. Furthermore, we rely on library code that operates at lower levels of abstraction still.
Methods also allow you to comply with the "do not repeat yourself" principle (often known as "DRY"). If you need to both serialize and validate objects in many parts of your application, you'd want to have a SerializeAndValidate method to reduce duplicative code. You'd be very well advised to implement the method as a simple convenience method:
void SerializeAndValidate(SomeClass obj)
{
    Serialize(obj);
    Validate(obj);
}
This allows you the convenience of calling one method, while preserving the separation of serialization logic from validation logic, which should make the program easier to maintain.
I don't see this as doing more than one thing. It is just making sure your string passes a required condition.
I came across some code recently that replaces the use of switches by hard-coding a
Dictionary<string (or whatever we would've been switching on), Func<...>>
and wherever the switch would've been, it instead does dict["value"].Invoke(...).
The code feels wrong in some way, but at the same time, the methods do look a bit cleaner, especially when there's many possible cases. I can't give any rationale as to why this is good or bad design so I was hoping someone could give some reasons to support/condemn this kind of code. Is there a gain in performance? Loss of clarity?
Example:
public class A {
    ...
    public int SomeMethod(string arg) {
        ...
        switch (arg) {
            case "a": /* do stuff */ break;
            case "b": /* do other stuff */ break;
            // etc.
        }
        ...
    }
    ...
}
becomes
public class A {
    Dictionary<string, Func<int>> funcs = new Dictionary<string, Func<int>> {
        { "a", () => 0 },
        { "b", () => DoOtherStuff() },
        // ... etc.
    };
    public int SomeMethod(string arg) {
        ...
        funcs[arg].Invoke();
        ...
    }
    ...
}
Advantages:
You can change the behaviour of the "switch" at runtime
It doesn't clutter the methods using it
You can have non-literal cases (i.e. case a + b == 3) with much less hassle
Disadvantages:
All of your methods must have the same signature.
You have a change of scope: you can't use variables defined in the scope of the method unless you capture them in the lambda, and you'll have to take care of redefining all the lambdas should you add a variable at some point.
You'll have to deal with non-existent keys explicitly (similar to default in a switch).
The stack trace will be more complicated if an unhandled exception should bubble up, resulting in a harder-to-debug application.
Should you use it? It really depends. You'll have to define the dictionary somewhere, so the code will be cluttered by it in some place. You'll have to decide for yourself. If you need to switch behaviour at runtime, the dictionary solution really sticks out, especially if the methods you use don't have side effects (i.e. don't need access to scoped variables).
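For the missing-key disadvantage, a guard along these lines inside SomeMethod (reusing the funcs dictionary and arg parameter from the question) plays the role of the switch's default branch:

if (funcs.TryGetValue(arg, out Func<int> handler))
{
    return handler();
}
// No entry registered: behave like the default case of a switch.
throw new ArgumentOutOfRangeException(nameof(arg), $"No handler registered for '{arg}'.");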
For several reasons:
Because doing it this way allows you to select what each case branch will do at runtime. Otherwise, you have to compile it in.
What's more, you can also change the number of branches at runtime.
The code looks much cleaner especially with a large number of branches, as you mention.
Why does this solution feel wrong to you? If the dictionary is populated at compile time, then you certainly don't lose any safety (the delegates that go in certainly have to compile without error). You do lose a little performance, but:
In most cases the performance loss is a non-issue
The flexibility you gain is enormous
Jon has a couple good answers. Here are some more:
Whenever you need a new case in a switch, you have to code it into that switch statement. That requires opening up that class (which previously worked just fine), adding the new code, and re-compiling and re-testing that class and any class that used it. This violates a SOLID development rule, the Open-Closed Principle (classes should be closed to modification, but open to extension). By contrast, a Dictionary of delegates allows delegates to be added, removed, and swapped out at will, without changing the code doing the selecting.
Using a Dictionary of delegates allows the code to be performed in a condition to be located anywhere, and thus given to the Dictionary from anywhere. Given this freedom, it's easy to turn the design into a Strategy pattern where each delegate is provided by a unique class that performs the logic for that case. This supports encapsulation of code and the Single Responsibility Principle (a class should do one thing, and should be the only class responsible for that thing).
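One hedged illustration of that idea (the dispatcher class and its registration API are invented for this sketch): because the selecting code only knows about the dictionary, new behaviour can be registered from anywhere without modifying it.

using System;
using System.Collections.Generic;

public class CommandDispatcher
{
    private readonly Dictionary<string, Func<int>> _handlers = new Dictionary<string, Func<int>>();

    // New behaviour is registered from outside; the dispatch code never changes.
    public void Register(string name, Func<int> handler) => _handlers[name] = handler;

    public int Dispatch(string name) => _handlers[name]();
}

// Elsewhere - possibly in another assembly - without touching CommandDispatcher:
var dispatcher = new CommandDispatcher();
dispatcher.Register("answer", () => 42);
Console.WriteLine(dispatcher.Dispatch("answer"));   // prints 42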
If there is a large number of possible cases, then it is a good idea to replace the switch statement with the strategy pattern. See this:
Applying Strategy Pattern Instead of Using Switch Statements
No one has said anything yet about what I believe to be the single biggest drawback of this approach.
It's less maintainable.
I say this for two reasons.
It's syntactically more complex.
It requires more reasoning to understand.
Most programmers know how a switch statement works. Many programmers have never seen a Dictionary of functions.
While this might seem like an interesting and novel alternative to the switch statement and may very well be the only way to solve some problems, it is considerably more complex. If you don't need the added flexibility you shouldn't use it.
Convert your A class to a partial class, and create a second partial class in another file with just the delegate dictionary in it.
Now you can change the number of branches, and add logic to your switch statement without touching the source for the rest of your class.
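A rough sketch of that split, reusing the funcs dictionary from the question (the file names are made up):

using System;
using System.Collections.Generic;

// File 1 (e.g. A.Logic.cs): the selection logic never changes.
public partial class A
{
    public int SomeMethod(string arg)
    {
        return funcs[arg].Invoke();
    }
}

// File 2 (e.g. A.Handlers.cs): only this part is touched when branches are added, removed or swapped.
public partial class A
{
    private readonly Dictionary<string, Func<int>> funcs = new Dictionary<string, Func<int>>
    {
        { "a", () => 0 },
        { "b", () => 1 },
    };
}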
(Regardless of language) Performance-wise, where such code exists in a critical section, you are almost certainly better off with a function look-up table.
The reason is that you eliminate multiple runtime conditionals (the longer your switch, the more comparisons there will be) in favour of simple array indexing and function call.
The only performance downside is you've introduced the cost of a function call. This will typically be preferable to said conditionals. Profile the difference; YMMV.
I want to design a class which contains a procedure to achieve a goal.
It must follow some order to make sure the last method, let's say "ExecuteIt", behaves correctly.
In such a case, what design pattern would you use?
Which pattern can make sure that the user must call the public methods in a particular order?
If you really don't know what I am saying, can you share some concepts for choosing a design pattern, or what you would consider while designing a class?
I believe you are looking for the Template Method pattern.
Template Method is what you want. It is one of the oldest, simply a formalization of a way of composing your classes.
http://en.wikipedia.org/wiki/Template_method_pattern
or as in this code sample:
abstract class AbstractParent // this is the template class
{
    // this is the template method that enforces an order of method execution
    final void executeIt()
    {
        doBefore();       // << to be implemented by subclasses
        doInTheMiddle();  // also to be implemented by subclasses
        doLast();         // << the one you want to make sure gets executed last
    }

    abstract void doBefore();
    abstract void doInTheMiddle();
    final void doLast() { .... }
}

class SubA extends AbstractParent
{
    void doBefore() { ... does something ... }
    void doInTheMiddle() { ... does something ... }
}

class SubB extends SubA
{
    void doBefore() { ... does something different ... }
}
But it seems you are fishing for an opportunity to use a pattern, as opposed to using a pattern to solve a specific type of problem. That will only lead you to bad software development habits.
Don't think about patterns. Think about how you would go about solving that specific problem without having patterns.
Imagine there were no codified patterns (which is how it was before). How would you accomplish what you want to do here? (That is what people did to solve this type of problem.) When you can do that, you will be in a much better position to understand patterns.
Don't use them as cookie cutters. That is the last thing you want to do.
It's basically not a pattern, but: if you want to make sure the code/methods are executed in a specific order, make the class have only one public method, which then calls the non-public methods in the right sequence.
The simple and pragmatic approach to enforcing a particular sequence of steps in any API is to define a collection of classes (instead of just one class) in such a way that each valid next step takes as a parameter an object produced by the previous step, i.e.:
Fuel coal = CoalMine.getCoal();
Cooker stove = new Cooker(coal);
Filling apple = new AppleFilling();
Pie applePie = new Pie(apple);
applePie.bake(stove);
That is to say, to bake a pie you need to supply a Cooker object, which in turn requires some sort of suitable fuel to be instantiated first. Similarly, before you can get an instance of a Pie you'd need to get some Filling ready.
In this instance the semantics of the API use are explicitly enforced by its syntax. Keep it simple.
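To make that concrete, the declarations behind such a snippet might look roughly like this (a sketch in C#-style naming, not an actual API):

public class Fuel { }

public static class CoalMine
{
    public static Fuel GetCoal() => new Fuel();
}

public class Cooker
{
    public Cooker(Fuel fuel) { }   // a Cooker cannot exist without Fuel
}

public class Filling { }
public class AppleFilling : Filling { }

public class Pie
{
    private readonly Filling _filling;

    public Pie(Filling filling) { _filling = filling; }   // a Pie cannot exist without a Filling

    // Baking requires a Cooker, which in turn required Fuel:
    // the compiler makes it impossible to call the steps out of order.
    public void Bake(Cooker cooker) { }
}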
I think you should not really execute anything up front; just prepare the statements, resources and whatever else you want.
This way, whatever order the user invokes the methods in, the actual execution is guaranteed to be ordered, simply because you have total control over the real execution just before you run it.
IMHO the Template Method has very little to do with your goal.
EDIT:
To be more clear: make your class have one public method Execute, and a number of other public methods to tell your class what to do (when to do it is your responsibility, not the user's); then make a number of private methods that do the real job; they will be invoked in the right order by your Execute, once the user has finished setting things up.
Give the user the ability to configure; keep execution for yourself. He tells you what, you decide how.
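A minimal sketch of that shape (all names invented for illustration):

public class ReportJob
{
    private string _source;
    private string _format = "pdf";

    // Public "what" methods: the user can call these in any order.
    public void UseSource(string source) => _source = source;
    public void UseFormat(string format) => _format = format;

    // The single entry point decides the "how" and the order.
    public void Execute()
    {
        Validate();
        Load();
        Render();
    }

    private void Validate() { /* check _source and _format */ }
    private void Load()     { /* read the data */ }
    private void Render()   { /* produce the output */ }
}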
Template Method is rational if you have a class hierarchy and the base class defines the protected operation steps in its public template method. Could you elaborate on your question?
As a general concept you should choose a pattern as a standard solution to a standard problem, so I agree with Oded: the Template Method seems to fit your needs (though what you've explained may be too little to go on).
Don't use patterns as a "fetish"; what you have to keep in mind is:
How can I frame my problem in a standard way?
Is there a pattern for this?
Is this the simplest way?
I am refactoring a class so that the code is testable (using NUnit and RhinoMocks as testing and isolation frameworks) and have found myself with a method that is dependent on another (i.e. it depends on something which is created by that other method). Something like the following:
public class Impersonator
{
    private ImpersonationContext _context;

    public void Impersonate()
    {
        ...
        _context = GetContext();
        ...
    }

    public void UndoImpersonation()
    {
        if (_context != null)
            _someDepend.Undo();
    }
}
Which means that to test UndoImpersonation, I need to set it up by calling Impersonate (Impersonate already has several unit tests to verify its behaviour). This smells bad to me but in some sense it makes sense from the point of view of the code that calls into this class:
public void ExerciseClassToTest(Impersonator c)
{
    try
    {
        if (NeedImpersonation())
        {
            c.Impersonate();
        }
        ...
    }
    finally
    {
        c.UndoImpersonation();
    }
}
I wouldn't have worked this out if I didn't try to write a unit test for UndoImpersonation and found myself having to set up the test by calling the other public method. So, is this a bad smell and if so how can I work around it?
Code smell has got to be one of the most vague terms I have ever encountered in the programming world. For a group of people that pride themselves on engineering principles, it ranks right up there in terms of unmeasurable rubbish, and about as useless a measure, as LOCs per day for programmer efficiency.
Anyway, that's my rant, thanks for listening :-)
To answer your specific question, I don't believe this is a problem. If you test something that has pre-conditions, you need to ensure the pre-conditions have been set up first for the given test case.
One of the tests should be what happens when you call it without first setting up the pre-conditions - it should either fail gracefully or set up its own pre-condition if the caller hasn't bothered to do so.
Well, there is a bit too little context to tell, but it looks like _someDepend should be initialized in the constructor.
Initializing fields in an instance method is a big NO for me. A class should be fully usable (i.e. all methods work) as soon as it is constructed; so the constructor(s) should initialize all instance variables. See e.g. the page on single step construction in Ward Cunningham's wiki.
The reason initializing fields in an instance method is bad is mainly that it imposes an implicit ordering on how you can call methods. In your case, TheMethodIWantToTest will do different things depending on whether DoStuff was called first. This is generally not something a user of your class would expect, so it's bad :-(.
That said, sometimes this kind of coupling may be unavoidable (e.g. if one method acquires a resource such as a file handle, and another method is needed to release it). But even that should be handled within one method if possible.
What applies to your case is hard to tell without more context.
Provided you don't consider mutable objects a code smell by themselves, having to put an object into the state needed for a test is simply part of the set-up for that test.
This is often unavoidable, for instance when working with remote connections - you have to call Open() before you can call Close(), and you don't want Open() to automatically happen in the constructor.
However you want to be very careful when doing this that the pattern is something readily understood - for instance I think most users accept this kind of behaviour for anything transactional, but might be surprised when they encounter DoStuff() and TheMethodIWantToTest() (whatever they're really called).
It's normally best practice to have a property that represents the current state - again look at remote or DB connections for an example of a consistently understood design.
The big no-no is for this to ever happen for properties. Properties should never care what order they are called in. If you have a simple value that does depend on the order of methods then it should be a parameterless method instead of a property-get.
Yes, I think there is a code smell in this case. Not because of dependencies between methods, but because of the vague identity of the object. Rather than having an Impersonator which can be in different persona states, why not have an immutable Persona?
If you need a different Persona, just create a new one rather than changing the state of an existing object. If you need to do some cleanup afterwards, make Persona disposable. You can keep the Impersonator class as a factory:
using (var persona = impersonator.CreatePersona(...))
{
    // do something with the persona
}
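A sketch of what that could look like (an assumed API, not the original code - ImpersonationContext is a hypothetical stand-in for whatever the impersonation library provides):

using System;

public class ImpersonationContext
{
    public void Undo() { /* revert to the original identity */ }
}

public sealed class Persona : IDisposable
{
    private readonly ImpersonationContext _context;

    internal Persona(ImpersonationContext context)
    {
        _context = context;   // state is fixed at construction; the Persona never mutates
    }

    public void Dispose()
    {
        _context.Undo();      // cleanup runs exactly once, when the using block ends
    }
}

public class Impersonator
{
    public Persona CreatePersona(string userName)
    {
        var context = new ImpersonationContext();   // acquire the real context here
        return new Persona(context);
    }
}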
To answer the title: having methods call each other (chaining) is unavoidable in object-oriented programming, so in my view there is nothing wrong with testing a method that calls another. After all, the "unit" you're testing can be a whole class, not just a single method.
The level of chaining depends on the design of your object - you can either fork or cascade.
Forking:
classToTest1.SomeDependency.DoSomething()
Cascading:
classToTest1.DoSomething() (which internally would call SomeDependency.DoSomething)
But as others have mentioned, definitely keep your state initialisation in the constructor which from what I can tell, will probably solve your issue.