I am currently often confronted with code that follows the pattern demonstrated below. It is a kind of strategy pattern.
I am not comfortable with this; it feels somehow smelly. It deviates from the actual strategy pattern, and the additional indirection is confusing. It also often leads to similarly named methods, as in the example, because the purpose of the method in the derived class is very similar to that of the method in the base class. On the other hand, I cannot put my finger on the core of the problem.
Am I the only one who finds this fishy? And if not, what problems can this code cause, especially with regard to the SOLID principles?
using System;

namespace MyTest
{
    abstract class Base
    {
        public void DoSomething()
        {
            var param = Prepare();
            DoItReally(param);
        }

        private string Prepare()
        {
            return "Hallo Welt";
        }

        protected abstract void DoItReally(string param);
    }

    class DerivedOne : Base
    {
        protected override void DoItReally(string param)
        {
            Console.WriteLine(param);
        }
    }

    class DerivedTwo : Base
    {
        protected override void DoItReally(string param)
        {
            Console.WriteLine(param.ToUpper());
        }
    }

    static class Program
    {
        public static void Main(string[] args)
        {
            Base strategy = new DerivedOne();
            strategy.DoSomething();

            strategy = new DerivedTwo();
            strategy.DoSomething();
        }
    }
}
If you separate the code into its two constituent parts, you should see that both of them are perfectly fine.
1. Abstracting some logic to a private method
public void DoSomething()
{
    var param = Prepare();
    DoItReally(param);
}

private string Prepare()
{
    return "Hallo Welt";
}
Given that the method bodies for both methods are fixed, and in this case really short, we can argue about the private method not being necessary here. But this is just an example, and in cases where your initialization logic becomes much more complex (e.g. a complex data query and calculation), the need to abstract this into a private method increases.
2. Subclasses implementing abstract base methods
That's exactly why abstract exists as a keyword.
Your base class knows how to fetch the data, but since there are multiple ways of handling that data, there are subclasses which each define one of those options. You've maximized reusability in the base class; the only thing that's not reusable is each subclass's custom handling logic.
I think you're getting caught up on labeling patterns and antipatterns much more than is actually productive.
Clean code doesn't come from the quest to find antipatterns. Clean code comes from understanding your needs, realizing that the code does not suit your purpose or has an unwanted side effect, and then refactoring the code to keep its benefit while losing or minimizing its drawbacks.
As of right now, you have not shown any issue with the code itself, or why it may be causing an unwanted side effect. Your question is hypochondriac in nature: you're over-eagerly looking for issues rather than fixing an issue that concretely affects you.
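For what it's worth, the shape in the question is usually called Template Method rather than Strategy. For contrast, a minimal sketch of the same behavior expressed as a textbook Strategy, using composition instead of inheritance (all names here are hypothetical, not from the question's code):

```csharp
using System;

// The varying step is extracted into an interface...
interface IOutputStrategy
{
    void DoItReally(string param);
}

class PlainOutput : IOutputStrategy
{
    public void DoItReally(string param) => Console.WriteLine(param);
}

class UpperOutput : IOutputStrategy
{
    public void DoItReally(string param) => Console.WriteLine(param.ToUpper());
}

// ...and injected into the class that owns the fixed algorithm,
// instead of being supplied by a subclass override.
class Worker
{
    private readonly IOutputStrategy _strategy;

    public Worker(IOutputStrategy strategy) => _strategy = strategy;

    public void DoSomething()
    {
        var param = Prepare();
        _strategy.DoItReally(param);
    }

    private string Prepare() => "Hallo Welt";
}
```

Usage would be `new Worker(new UpperOutput()).DoSomething();`, which prints `HALLO WELT`. Whether composition is actually better than the inheritance version depends on whether the strategy needs to vary independently of the surrounding algorithm.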
Related
So I have the following class, which takes, in its constructor, three different implementations of the same interface as dependencies:
public class MyTestClass : ISomeInterface<string>
{
    // Constructor
    public MyTestClass([Named("ImplementationA")] IMyTestInterface implementationA,
                       [Named("ImplementationB")] IMyTestInterface implementationB,
                       [Named("ImplementationC")] IMyTestInterface implementationC)
    {
        // Some logic here
    }

    public void MethodA(string value)
    {
    }
}
When using this class outside of unit tests, I inject the dependencies in question with Ninject, i.e. I have something like this:
public class MyNinjectModule : NinjectModule
{
    public override void Load()
    {
        this.Bind<IMyTestInterface>().To<ImplementationA>().InRequestScope().Named("ImplementationA");
        this.Bind<IMyTestInterface>().To<ImplementationB>().InRequestScope().Named("ImplementationB");
        this.Bind<IMyTestInterface>().To<ImplementationC>().InRequestScope().Named("ImplementationC");
    }
}
Which works fine, but the problem I am having now is that I want to unit test this class, and I want to do it with AutoFixture. That leads me to my question: how do I create an instance of MyTestClass with those three specific implementations of IMyTestInterface, i.e. ImplementationA, ImplementationB, and ImplementationC? If I just do something like this:
private ISomeInterface<string> testInstance;
private Fixture fixture;

[TestInitialize]
public void SetUp()
{
    this.fixture = new Fixture();
    this.fixture.Customize(new AutoMoqCustomization());
    this.testInstance = this.fixture.Create<MyTestClass>();
}
AutoFixture creates an instance of MyTestClass, but with some random implementations of IMyTestInterface, which are not the dependencies I want. I've been searching for an answer online, and the only thing I've found so far that seems similar to what I need, but not quite, is this
AutoFixture was originally built as a tool for Test-Driven Development (TDD), and TDD is all about feedback. In the spirit of GOOS, you should listen to your tests. If the tests are hard to write, you should reconsider your API design. AutoFixture tends to amplify that sort of feedback, so my first reaction is to recommend reconsidering the design.
Concrete dependencies
It sounds like you need the dependencies to be specific implementations of an interface. If that's the case, the current design is pulling in the wrong direction, and also violating the Liskov Substitution Principle (LSP). You could, instead, consider refactoring to Concrete Dependencies:
public class MyTestClass : ISomeInterface<string>
{
    // Constructor
    public MyTestClass(ImplementationA implementationA,
                       ImplementationB implementationB,
                       ImplementationC implementationC)
    {
        // Some logic here
    }

    public void MethodA(string value)
    {
    }
}
This makes it clear that MyTestClass depends on those three concrete classes, instead of obscuring the real dependencies.
It also has the added benefit of decoupling the design from a particular DI Container.
Zero, one, two, many
Another option is to adhere to the LSP, and allow any implementation of IMyTestInterface. If you do that, you shouldn't need the [Named] attributes any longer:
public class MyTestClass : ISomeInterface<string>
{
    // Constructor
    public MyTestClass(IMyTestInterface implementationA,
                       IMyTestInterface implementationB,
                       IMyTestInterface implementationC)
    {
        // Some logic here
    }

    public void MethodA(string value)
    {
    }
}
A question that may arise from such a design is: how do I distinguish between each dependency? Most of my Role Hints article series deals with this question.
Three objects, however, is grounds for further reflection. In software design, my experience is that when it comes to the cardinality of identical arguments, there are only four useful cases: none, one, two, and many.
No arguments is basically a statement or axiom.
One argument indicates a unary operation.
Two arguments indicate a binary operation.
More than two arguments essentially just indicates a multitude. In mathematics and software engineering, there are precious few ternary operations.
Thus, once you're past two arguments, the question is whether three arguments generalise to any number of arguments. If they do, a design might instead look like this:
public class MyTestClass : ISomeInterface<string>
{
    // Constructor
    public MyTestClass(IEnumerable<IMyTestInterface> deps)
    {
        // Some logic here
    }

    public void MethodA(string value)
    {
    }
}
Or, if you can create a Composite from IMyTestInterface, you could even reduce it to something like this:
public class MyTestClass : ISomeInterface<string>
{
    // Constructor
    public MyTestClass(IMyTestInterface dep)
    {
        // Some logic here
    }

    public void MethodA(string value)
    {
    }
}
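Since the members of IMyTestInterface aren't shown in the question, here is a sketch of what such a Composite could look like, assuming a hypothetical interface with a single `void Handle(string)` method (the method name and the RecordingHandler demo class are invented for illustration):

```csharp
using System.Collections.Generic;

// Hypothetical member; the real IMyTestInterface is not shown in the question.
public interface IMyTestInterface
{
    void Handle(string input);
}

// A Composite forwards each call to all of its children, so MyTestClass
// can depend on a single IMyTestInterface regardless of how many
// implementations participate.
public class CompositeTestInterface : IMyTestInterface
{
    private readonly IReadOnlyCollection<IMyTestInterface> _children;

    public CompositeTestInterface(params IMyTestInterface[] children)
    {
        _children = children;
    }

    public void Handle(string input)
    {
        foreach (var child in _children)
            child.Handle(input);
    }
}

// Tiny sample implementation, only for demonstration.
public class RecordingHandler : IMyTestInterface
{
    public List<string> Received { get; } = new List<string>();
    public void Handle(string input) => Received.Add(input);
}
```

With this in place, `new CompositeTestInterface(new ImplementationA(), new ImplementationB(), new ImplementationC())` could be registered once in the container, and MyTestClass only ever sees one dependency.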
All of these options make the design clearer, and should also have the derived benefit of making it easier to test with AutoFixture.
FWIW, while I believe that my first answer is the best answer, here's a simple way to address the issue with AutoFixture, in case for some reason you can't change the design:
fixture.Register(() =>
new MyTestClass(new ImpA(), new ImpB(), new ImpC()));
There are other options as well, but none as simple as this one, I believe.
Is there any way to force a downcast in the abstract base class when the derived type is actually known there (due to complicated generics)?
Right now my ugly workaround is to implement an abstract protected property This that simply returns this... so the usual caveats about downcasting being impossible due to "missing extra parts" don't apply; all the parts are there, I just have to force the type system to cut me some slack :/
protected abstract T This { get; }
I know the derived type, I just can't find any way to force the cast to happen!
Is there any way at all?
(This is part of a framework, so forcing the consumer to implement silly things like a This property really sucks. I would rather make the inner workings more complex and force the consumer to write as little code as possible.)
Edit:
Since it's hard to see what good this would do, I will try to add some more code. It may still look weird; I will add more again if needed.
Basically, this specific part of the problem involves two methods in this fashion (actual implementation details omitted; this is just an illustration):
abstract class Base<DerivedType, OtherType>
    where ... // Complicated constraints omitted
{
    protected abstract OtherType Factory(DerivedType d);

    public bool TryCreate(out OtherType o)
    {
        // Omitted some dependency on fields that reside in the base, and other stuff
        // if success...
        o = Factory((DerivedType)this); // <-- what I can not do
        return true;
    }
}
(As for the bigger picture it is part of a strongly typed alternative to some of the things you can do with xpath, working on top of Linq2Xml.)
The following class definition compiles. I don't know whether your omitted constraints would conflict with this.
public abstract class Base<DerivedType, OtherType>
    where DerivedType : Base<DerivedType, OtherType>
{
    protected abstract OtherType Factory(DerivedType d);

    public bool TryCreate(out OtherType o)
    {
        o = Factory((DerivedType)this);
        return true;
    }
}

public class MyClass : Base<MyClass, string>
{
    protected override string Factory(MyClass d)
    {
        return d.GetType().Name;
    }
}
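One caveat with this self-referential constraint: nothing stops a subclass from passing a *different* (valid) type argument, in which case the cast inside TryCreate throws at runtime. A sketch of both cases, restating the base class so the snippet compiles on its own (the class names Good and Sneaky are invented for illustration):

```csharp
public abstract class Base<TDerived, TOther>
    where TDerived : Base<TDerived, TOther>
{
    protected abstract TOther Factory(TDerived d);

    public bool TryCreate(out TOther o)
    {
        // Safe only when each subclass passes itself as TDerived.
        o = Factory((TDerived)this);
        return true;
    }
}

public class Good : Base<Good, string>
{
    protected override string Factory(Good d) => d.GetType().Name;
}

// Compiles fine, but TryCreate throws InvalidCastException at runtime:
// `this` is a Sneaky, which is not a Good.
public class Sneaky : Base<Good, string>
{
    protected override string Factory(Good d) => d.GetType().Name;
}
```

So the compiler cannot fully enforce the "curiously recurring" convention; the framework has to rely on consumers following it.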
I am trying to create an object that can call one of three methods in response to user input. Let's call them DoShape, DoColor, and DoPowerTool. These methods exist as part of a base class and interfaces, respectively:
public class PowerTool
{
    ...

    public virtual void DoPowerTool()
    {
        ...
    }
}

public interface Shape
{
    void DoShape(PowerTool p);
}

public interface Color
{
    void DoColor(PowerTool p);
}
As DoShape and DoColor require a PowerTool, the intuitive thing would be to somehow bake those methods into the PowerTool instance itself.
Unfortunately, I can't have the PowerTool class implement Shape and Color directly, since I won't know until runtime whether it's a Red Square Jackhammer or a Green Ovoid one.
I've dabbled with C#'s delegates, and have a feeling that they might be useful in this case, but I'm not familiar enough to know how to implement them in this precise scenario, and I'm unsure if they're the right choice to give me the modular behavior I'm looking for. (I may be spoiled by JavaScript in this regard, where you can simply overwrite functions on objects.)
Any thoughts or suggestions on how to continue? I feel like there's probably a very simple solution that I'm overlooking in all this.
A relatively simple way to do what you're talking about (from what I can tell) would be to simply have 3 interfaces:
public interface IPowerTool : IShape, IColor
{
    void Execute();
}
Thus, you can simply define:
public class RedSquareJackhammer : IPowerTool
{
    public void DoShape() { }
    public void DoColor() { }
    public void Execute() { }
}
Another option is to do this:
public class PowerTool
{
    IColor color;
    IShape shape;

    public PowerTool(IColor c, IShape s)
    {
        color = c;
        shape = s;
    }

    public void Execute()
    {
        color.DoColor();
        shape.DoShape();
    }
}
Then you call it like this:
// JackHammer is derived from PowerTool, Red from IColor, Square from IShape
var redSquareJackhammer = new JackHammer(new Red(), new Square());
Etc.. there are many ways to skin this cat.
I believe what you are referring to is the concept of a Mixin.
This is where one or more implementations for an interface are defined, and then those implementations are reused by their concrete types.
This is unfortunately not really possible in .NET, because the framework was designed without the ability to use multiple inheritance. You may, however, be able to replicate the behavior you're seeking using extension methods, or through dependency injection and the strategy pattern.
If you have any further questions about the patterns, or require more applicable examples, just let me know what you're attempting to accomplish in more detail and I will do my best to help.
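As a rough illustration of the extension-method route mentioned above, shared behavior can be attached to every implementation of an interface without multiple inheritance (the interface member and names here are hypothetical, not from the question):

```csharp
// Hypothetical interface; any implementer gets the mixin behavior below.
public interface IShape
{
    string Name { get; }
}

// Extension methods act like a limited mixin: one shared implementation,
// available on every IShape, with no base class required.
public static class ShapeMixins
{
    public static string Describe(this IShape shape)
    {
        return "This shape is a " + shape.Name;
    }
}

public class Square : IShape
{
    public string Name => "Square";
}
```

With this, `new Square().Describe()` returns `"This shape is a Square"`. The limitation compared to true mixins is that extension methods cannot hold per-object state.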
If I understand what you are describing, the following strategy pattern may be of benefit:
internal interface IPowerToolStrategy
{
    void Execute(PowerTool powerTool);
}

public class RedSquareJackhammer : IShape
{
    private readonly IPowerToolStrategy _strategy;

    internal RedSquareJackhammer(IPowerToolStrategy strategy)
    {
        _strategy = strategy;
    }

    public void DoShape(PowerTool powerTool)
    {
        _strategy.Execute(powerTool);
    }
}
I want to evaluate a T parameter to perform a common behavior.
I was trying to call this method from different buttons:
private void Execute<T>(string strValue)
{
    //Do operations
    this.SaveObject<T>();
}
Button1
this.Execute<Employee>("somevalue1");
Button2
this.Execute<Supplier>("somevalue2");
but then the problem is when I want to define the SaveObject method: at that point, how can I evaluate T? I tried this, but it tells me that T is a type parameter and I'm using it as a variable:
private void SaveObject<T>()
{
    //Here the problem
    if(T is Employee)
    {
        //Do something
    }

    if(T is Supplier)
    {
        //Do something
    }
}
I want to know what kind of type it is and then do my specific operations. All the objects inherit from EntityObject.
------EDIT------
At the time of the question, the only thing I needed to fix my problem was the comment from Silvermind: typeof(T). Then I took the approach from many of you to improve the architecture.
If Silvermind had posted his comment as an answer, it would have been my accepted answer.
Anyway, thanks to all of you guys.
HighCore is correct: if you want to implement this functionality, your best choice would be to create an abstract base class with the supported virtual methods, and then override them in type-specific classes which inherit from the abstract base class. Something similar to:
public abstract class BaseManager<T> where T : class
{
    public virtual void SaveObject()
    {
        // Some common save logic, if it can be done
    }
}

public class EmployeeManager : BaseManager<Employee>
{
    public override void SaveObject()
    {
        // Your save logic
    }
}
Hope this helps! Good luck!
You can use typeof(T) from within your generic method.
EDIT
To give clarification (for those people who love downvoting :-) ), you can then use this information in your method as follows:
private void SaveObject<T>()
{
    if (typeof(Employee).IsAssignableFrom(typeof(T)))
    {
        //Do something
    }
}
Apologies for not being as explicit before.
If you find yourself writing generic code where you are saying
if (typeof(T) == typeof(SomeType))
most likely there is some error in your logic. You might want to use method overloading instead: if you only know how to handle SomeType and SomeOtherType, then why not have Save(SomeType) and Save(SomeOtherType)?
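A sketch of the overloading approach, using Employee and Supplier from the question (the Saver class and the return values are invented for illustration; the real methods would presumably persist the entities):

```csharp
// Stand-ins for the question's entity types.
public class Employee { }
public class Supplier { }

public class Saver
{
    // Overload resolution picks the right Save at compile time,
    // so no typeof(T) branching is needed inside a generic method.
    public string Save(Employee item) { return "saved an Employee"; }
    public string Save(Supplier item) { return "saved a Supplier"; }
}
```

Each button handler then calls `saver.Save(employee)` or `saver.Save(supplier)` directly, and the compiler, not a runtime check, selects the behavior.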
If you can, make your types conform to an interface or share a base class. That way you can redefine the method like so, moving the effort of saving the item onto the item itself and keeping all the prep and post logic in the handler:
void Save<T>(T item) where T : ICanSave
{
    //prep code here
    item.Save();
    //finalize code here
}
Of course, perhaps your object doesn't need to know how to save itself, so you may want to move the implementation into a provider so that there is a SaveProvider<T>, and so any arbitrary item can be saved provided somebody sends you a provider...
void Save<T>(T item, SaveProvider<T> provider)
{
    //prep code here
    provider.Save(item);
    //finalize code here
}
Of course you can probably default this stuff too.
I had a class that had lots of methods:
public class MyClass
{
    public bool checkConditions()
    {
        return checkCondition1() &&
               checkCondition2() &&
               checkCondition3();
    }

    // ...condition methods

    public void DoProcess()
    {
        FirstPartOfProcess();
        SecondPartOfProcess();
        ThirdPartOfProcess();
    }

    // ...process methods
}
I identified two "vital" work areas and decided to extract those methods into classes of their own:
public class MyClass
{
    private readonly MyClassConditions _conditions = new ...;
    private readonly MyClassProcessExecution _process = new ...;

    public bool checkConditions()
    {
        return _conditions.checkConditions();
    }

    public void DoProcess()
    {
        _process.DoProcess();
    }
}
In Java, I'd define MyClassConditions and MyClassProcessExecution as package protected, but I can't do that in C#.
How would you go about doing this in C#?
Setting both classes as inner classes of MyClass?
I have two options: I either define them inside MyClass, having everything in the same file, which looks confusing and ugly, or I can define MyClass as a partial class, with one file for MyClass, another for MyClassConditions, and another for MyClassProcessExecution.
Defining them as internal?
I don't really like the internal modifier much, as I don't find these classes add any value at all to the rest of my program/assembly, and I'd like to hide them if possible. It's not like they're going to be useful/reusable in any other part of the program.
Keep them as public?
I can't see why, but I've let this option here.
Any other?
Name it!
Thanks
Your best bet is probably to use partial classes and put the three clumps of code in separate files adding to the same class. You can then make the conditional and process code private so that only the class itself can access them.
For "helper" type classes that aren't going to be used outside the current assembly, internal is the way to go if the methods are going to be used by multiple classes.
For methods that are only going to be used by a single class, I'd just make them private to the class, or use inner classes if it's actually a class that's not used anywhere else. You can also factor out code into static methods if the code doesn't rely on any (non-static) members of your class.
I can define MyClass as a partial class, having one file for MyClass, another for MyClassConditions, and another for MyClassProcessExecution.
Maybe it's my C++ background, but this is my standard approach, though I bundle small helper classes together into a single file.
Thus, on one of my current projects, the Product class is split between Product.cs and ProductPrivate.cs
I'm going for something else - the issue of public/protected/private may not be solved specifically by this, but I think it lends itself much better to maintenance than a lot of nested, internal classes.
It sounds like you've got a set of steps in a sequential algorithm, where the execution of one step may or may not depend on the execution of the previous step. This type of sequential step processing can sometimes use the Chain of Responsibility pattern, although it is morphed a little from its original intention. Focusing only on your "processing method", for example, starting from something like below:
class LargeClass
{
    public void DoProcess()
    {
        if (DoProcess1())
        {
            if (DoProcess2())
            {
                DoProcess3();
            }
        }
    }

    protected bool DoProcess1()
    {
        ...
    }

    protected bool DoProcess2()
    {
        ...
    }

    protected bool DoProcess3()
    {
        ...
    }
}
Using Chain of Responsibility, this could be decomposed into a set of concrete classes for each step, which inherit from some abstract step class. The abstract step class is more responsible for making sure that the next step is called, if the necessary preconditions are met.
public abstract class AbstractStep
{
    public AbstractStep NextStep { get; set; }

    public virtual bool ExecuteStep()
    {
        if (NextStep != null)
        {
            return NextStep.ExecuteStep();
        }
        return true;
    }
}
public class ConcreteStep1 : AbstractStep
{
    public override bool ExecuteStep()
    {
        // execute DoProcess1 stuff
        // call base
        return base.ExecuteStep();
    }
}
...
public class ConcreteStep3 : AbstractStep
{
    public override bool ExecuteStep()
    {
        // Execute DoProcess3 stuff
        // call base
        return true; // or false?
    }
}
To set this up, you would, in some portion of the code, do the following:
var stepOne = new ConcreteStep1();
var stepTwo = new ConcreteStep2();
var stepThree = new ConcreteStep3();
stepOne.NextStep = stepTwo;
stepTwo.NextStep = stepThree;
bool success = stepOne.ExecuteStep();
This may help clean up the code bloat you've got in your single class. I've used it for a few sequential algorithms in the past, and it's helped isolate each step nicely. You could obviously apply the same idea to your condition checking (or build the checks into each step, if that applies). You can also vary this by passing state between the steps, by having the ExecuteStep method take a parameter with a state object of some sort.
Of course, if what you're really concerned about in this post is simply hiding the various steps, then yes, you could make each of your substeps a protected class within the class that creates the steps. Unless you're exposing your library to customers in some form or fashion, however, and you don't want them to have any visibility into your execution steps, this seems a smaller concern than making the code maintainable.
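The state-passing variation mentioned above might look like the sketch below (the StepContext type and the RecordingStep demo class are hypothetical; a real chain would carry whatever state the steps actually share):

```csharp
using System.Collections.Generic;

// Shared state handed along the chain.
public class StepContext
{
    public List<string> Log { get; } = new List<string>();
}

public abstract class AbstractStep
{
    public AbstractStep NextStep { get; set; }

    public virtual bool ExecuteStep(StepContext context)
    {
        // Default behavior: pass the shared state to the next step, if any.
        return NextStep == null || NextStep.ExecuteStep(context);
    }
}

// Demo step that just records its name, to show the ordering.
public class RecordingStep : AbstractStep
{
    private readonly string _name;

    public RecordingStep(string name) { _name = name; }

    public override bool ExecuteStep(StepContext context)
    {
        context.Log.Add(_name);           // this step's own work
        return base.ExecuteStep(context); // then continue down the chain
    }
}
```

Wiring two steps together and calling `first.ExecuteStep(new StepContext())` runs them in order, with each step reading from and writing to the shared context instead of fields on one large class.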
Create the classes with the same access modifier as the methods you have refactored. Partial classes are only really useful when you have multiple people or automated code-generation tools frequently modifying the same classes. They mainly avoid source-merge hell, where your source control mangles your code because it can't merge multiple edits to the same file.