Lazy initialization of dependencies injected into constructor - C#

I have a class where I am injecting two service dependencies. I am using Unity container.
public interface IOrganizer
{
    void Method1();
    void Method2();
    void Method3();
}

public class Organizer : IOrganizer
{
    private IService1 _service1;
    private IService2 _service2;

    public Organizer(IService1 service1, IService2 service2)
    {
        _service1 = service1;
        _service2 = service2;
    }

    public void Method1()
    {
        /* makes use of both _service1 and _service2 to serve the purpose */
    }

    public void Method2()
    {
        /* makes use of only _service1 to serve the purpose */
    }

    public void Method3()
    {
        /* makes use of only _service2 to serve the purpose */
    }
}
While it all works, it somehow smells: when I invoke only Method2 or Method3, Unity unnecessarily creates an instance of the other, unneeded service. The code snippet here is just a sample for explanation purposes; in the real situation the object graph of these injected services is itself quite deep.
Is there a better way to design and address this kind of scenario?

I think your sense of smell is competent. Most people would happily code like this without a second thought. I do agree, though, that there are a few code smells in a design like the one outlined in the OP.
I'd like to point out that I use the term code smell as in Refactoring. It's an indication that something may not be right, and it might be worthwhile to investigate further. Sometimes, such investigation reveals that there are good reasons that the code is as it is, and you move on.
There are at least two separate smells in the OP. They're unrelated, so I'll treat each one separately.
Cohesion
A fundamental, but often forgotten concept of object-oriented design is that of cohesion. Think of it as a counter-force to separation of concerns. As Kent Beck once put it (the exact source escapes me, so I paraphrase), things that vary together, belong together, while things that vary independently should be separated.
Without the 'force' of cohesion, the 'force' of separation of concerns would pull code apart until you have extraordinarily small classes, and even simple business logic is spread across multiple files.
One way to look for cohesion, or lack thereof, is to 'count' how many class fields are being used by each method of a class. While only a crude indicator, it does trigger our sense of smell in the OP code.
Method1 uses both class fields, so it is no cause for concern. Both Method2 and Method3, on the other hand, use only half of the class fields, so we could view that as an indication of poor cohesion - a code smell, if you will.
How can you address that? Again, I wish to emphasise that a smell isn't guaranteed to be bad. It's only a reason to investigate.
Still, if you want to address the issue, I can't think of any other way than breaking up the class into several smaller classes.
The Organizer class in the OP implements the IOrganizer interface, so technically, breaking up Organizer is only possible if you can also break up the interface - although you could write a Facade, and then delegate each method to a separate class that implements that particular method.
Still, the presence of an interface emphasises the importance of the Interface Segregation Principle. I often see code bases exhibit this particular problem because the interfaces are too big. If at all possible, make the interfaces as small as possible. I tend to take it to the extreme and define only a single member on each interface.
From another of the SOLID principles, the Dependency Inversion Principle, follows that interfaces should be defined by the clients that use them, not the classes that implement them. Designing interfaces like that often enables you to keep them small, and to the point.
Recall also that a single class can implement multiple interfaces.
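To make that concrete, here is one possible (hypothetical) split of the OP's IOrganizer into small, client-defined role interfaces; the interface and client names below are mine, not the OP's:

```csharp
// Hypothetical role interfaces carved out of the OP's IOrganizer;
// the names below are illustrative assumptions.
public interface IFullProcessor { void Method1(); }
public interface IService1Consumer { void Method2(); }
public interface IService2Consumer { void Method3(); }

// One class can still implement all three roles...
public class Organizer : IFullProcessor, IService1Consumer, IService2Consumer
{
    public void Method1() { /* uses both services */ }
    public void Method2() { /* uses service 1 only */ }
    public void Method3() { /* uses service 2 only */ }
}

// ...but each client depends only on the role it actually needs.
public class Client
{
    private readonly IService1Consumer _consumer;
    public Client(IService1Consumer consumer) { _consumer = consumer; }
    public void Run() => _consumer.Method2();
}
```

A client that only ever calls Method2 now advertises exactly that in its constructor, which also makes it obvious which dependencies it really needs.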
Performance
Another concern regarding the design in the OP is of performance, although I agree with NightOwl888's comment that you're likely in micro-optimisation territory.
In general, you can compose even large object graphs with confidence. As NightOwl888 also suggests in the comments above, if a dependency has Singleton lifetime, it makes little difference if you inject it, but then end up not using it.
Even if you can't give a dependency like _service2 Singleton lifetime, I again agree with NightOwl888 that object creation in .NET is fast to the point where you almost can't measure it. And as he also points out, Injection Constructors should be simple.
Even in the rare case where a dependency must have Transient lifetime, and for whatever reason creating an instance is expensive, you can always hide that dependency behind a Virtual Proxy, as I also describe in the article about object graphs.
How you configure all that in Unity, I no longer remember, but if Unity can't deal with that, choose another method of composition, preferably Pure DI.
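A Virtual Proxy for an expensive dependency might look like the sketch below. IService2 is from the OP, but its members and every other name here are illustrative assumptions; the point is only that the consumer still receives an IService2 up front while the expensive object is created on first use:

```csharp
using System;

// Sketch of a Virtual Proxy: satisfies IService2, but defers creating
// the real (expensive) instance until a member is first used.
// All names besides IService2 are illustrative assumptions.
public interface IService2
{
    void DoWork();
}

public class RealService2 : IService2
{
    public RealService2() { /* imagine an expensive object graph here */ }
    public void DoWork() { }
}

public class LazyService2Proxy : IService2
{
    private readonly Func<IService2> _factory;
    private IService2 _instance;

    public LazyService2Proxy(Func<IService2> factory)
    {
        _factory = factory;
    }

    private IService2 Instance => _instance ?? (_instance = _factory());

    // Delegate every IService2 member to the lazily created instance.
    public void DoWork() => Instance.DoWork();
}
```

The Organizer from the OP would be handed a LazyService2Proxy, and RealService2 is only constructed if one of its members is actually invoked.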

As long as you're using Unity 3 or higher, you don't need to do anything special to resolve lazily.
You register your type like you normally would:
container.RegisterType<IMyInterface, MyImplementation>();
And then change the constructor to require lazy:
public class MyClass
{
    private Lazy<IMyInterface> _service1;

    public MyClass(Lazy<IMyInterface> service1)
    {
        _service1 = service1;
    }
}
Then call whatever method you need:
_service1.Value.MyMethod();
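For comparison, here is the same lazy wiring done by hand (Pure DI), which is essentially what the container does for you behind the scenes. IMyInterface and MyClass match the answer above; MyImplementation is a stand-in name:

```csharp
using System;

// The same lazy wiring composed by hand (Pure DI).
// MyImplementation is a stand-in implementation name.
public interface IMyInterface
{
    void MyMethod();
}

public class MyImplementation : IMyInterface
{
    public void MyMethod() { }
}

public class MyClass
{
    private readonly Lazy<IMyInterface> _service1;

    public MyClass(Lazy<IMyInterface> service1)
    {
        _service1 = service1;
    }

    // The implementation is only constructed here, on first use.
    public void Run() => _service1.Value.MyMethod();
}
```

Composing it by hand is then just `new MyClass(new Lazy<IMyInterface>(() => new MyImplementation()))`; nothing is instantiated until `.Value` is touched.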

Related

Doesn't Dependency Injection simply move the issue elsewhere?

So Dependency Injection is usually recommended to help with unit testing, to solve the problem of a class depending on other classes. This sounds great, but let me walk through the issue I'm facing.
Here is a regular implementation without DI:
class Upper {
    Middle middle = new Middle();
}

class Middle {
    Lower lower = new Lower();
}

class Lower {
}
Now let's start at the bottom. Middle depends on Lower; we don't really want that, so we'll create a redundant interface for Lower and pass that to the constructor of Middle instead:
class Middle {
    ILower lower;
    public Middle(ILower lower) {
        this.lower = lower;
    }
}

interface ILower {
}

class Lower : ILower {
}
Well, this sounds great, right? Not really. Most examples I've seen stop here, forgetting that something needs to use the class Middle. Now we have to update Upper to be compatible:
class Upper {
    Middle middle = new Middle(new Lower());
}
This doesn't seem very useful... All that we've done is moved the problem up a level, and created an unusual dependency: Upper depends on Lower now? That's definitely not an improvement.
I must be missing the benefit, but it seems like DI just moved the issue rather than solved it. In fact, it also makes the code harder to understand.
Also, here is a "full" implementation:
interface IUpper {
}

class Upper : IUpper {
    IMiddle middle;
    public Upper(IMiddle middle) {
        this.middle = middle;
    }
}

interface IMiddle {
}

class Middle : IMiddle {
    ILower lower;
    public Middle(ILower lower) {
        this.lower = lower;
    }
}

interface ILower {
}

class Lower : ILower {
}
But again, I'm just moving the problem. To use Upper, I need:
new Upper(new Middle(new Lower()));
and now I depend on 3 classes!
Dependency injection simply refers to the way you create classes, so that their dependencies are provided for them ("injected" into them) instead of classes creating instances of their own dependencies.
Does DI just move the problem of creating class instances somewhere else?
Yes, that's exactly what it does, and that's good.
Every class instance that you use has to be instantiated somewhere. The question is where that instantiation takes place, and whether it makes your classes more or less manageable and testable.
The trade-off is that if one class directly creates instances of other classes it depends on, then sure, calling the constructor for that outer class is much simpler. You create that class, and it creates a bunch of other classes, and they create more classes, and so on. But each class that directly creates other classes becomes harder and harder to unit test. You can't write a test for just the behavior of one class when that behavior includes the behavior of the classes it creates, the classes they create, and so on. So in return for simpler constructor calls, you get code which is just about impossible to test and also very difficult to maintain.
Dependency injection moves creation of a class's dependencies out of the class. That makes each individual class easier to test and maintain, but it creates a different problem, as you've observed. Now your constructor calls are much more complicated, creating all sorts of nested dependencies.
What solves that problem is a dependency injection container, also called an IoC container. Some examples are Windsor, Autofac, Unity, and there are many more. With these, you simply specify an implementation for each individual interface on which any class might depend.
For example (this is Windsor syntax, but they're all somewhat similar)
container.Register(Component.For<InterfaceA>().ImplementedBy<ImplementationToUse>());
container.Register(Component.For<InterfaceB>().ImplementedBy<ImplementationForThis>());
container.Register(Component.For<InterfaceC>().ImplementedBy<ImplementationToUse>());
Then, if you call
var thingINeed = container.Resolve<InterfaceA>();
(That's not actually how we get a class instance from a container, but that's another story.)
It's going to figure out which classes it needs to create to pass to the constructor of the implementation. If those classes have more dependencies, it will create those, and so on, and so on.
So now you've got the best of both worlds. You can create as many small, testable classes as you want, with tons of nested dependencies, all depending on abstractions. If you were to try to call their constructors directly it would be insanely complicated, way beyond the example in your question. But you don't have to do that. You can just think of each class individually - what does it do, and what interfaces does it directly depend on?
You still have some complexity. Instead of calling a bunch of constructors you now have to register individual dependencies with a container. But it's a good trade-off, and you come out ahead because your classes are decoupled and testable.
When you inject dependencies, you are not exempt from the need to instantiate classes. You take the creation of dependencies out of dependent classes so that they do not depend on their specific implementation (calling the constructor of the aggregated class).
Yes, by moving dependencies to the upper level, you get this line:
new Upper(new Middle(new Lower()));
To create an object, you need to resolve the tree of its dependencies. To facilitate this work, there are many IoC containers. They allow you to reduce that line to this:
var upper = iocContainer.Resolve<IUpperInterface>();
As advantages, you get class testability, flexibility and reusability.
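The testability advantage is worth making concrete. Because Middle receives ILower through its constructor, a test can hand it a fake instead of the real Lower; FakeLower and the string-returning members below are mine, purely for illustration:

```csharp
// Sketch of the testability gain from constructor injection.
// FakeLower and the string-returning members are illustrative assumptions.
public interface ILower
{
    string Work();
}

public class Middle
{
    private readonly ILower _lower;
    public Middle(ILower lower) { _lower = lower; }
    public string Run() => "middle:" + _lower.Work();
}

// A hand-written fake: the real Lower (and its own dependency tree)
// never needs to be constructed in the test.
public class FakeLower : ILower
{
    public string Work() => "fake";
}
```

A unit test can now assert on `new Middle(new FakeLower()).Run()` and exercise only Middle's behavior, with no nested dependencies involved.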

Is the purpose of Dependency Injection pattern lost when you cast the parent interface to its child in C#?

I have a Data Repository interface called IRepository. BakerRepository inherits the generic CRUD methods from it. Now for BakerRepository, it may have some methods that are special to itself. For example, a method called Bake.
I am using Unity.Mvc for the container of the dependencies. This was how I originally use it in the controller, which I learned from a tutorial that I read a few days ago:
private IRepository<Baker, int> _repository;

public BakersController(IRepository<Baker, int> repo)
{
    _repository = repo;
}
The container will basically give me the BakerRepository implementation. However, when I try to use the Bake method, which is unique to BakerRepository, I get an error because _repository is of type IRepository<Baker, int>, and thus knows no Bake method.
So I tried this implementation instead:
private BakerRepository _repository;

public BakersController(IRepository<Baker, int> repo)
{
    _repository = repo as BakerRepository;
}
I don't fully understand the DI pattern, I'm only using it now because I learned it as a part of a tutorial about data repositories in ASP.Net MVC. I read up about it and I thought it's actually a good design pattern so I decided to keep using it, although I don't get it a hundred percent.
Now I'm wondering whether I've rendered the purpose of dependency injection useless by doing the implementation this way. I don't understand the DI pattern well enough, and I just couldn't find an exact answer elsewhere.
Casting the IRepository<Baker, int> in the constructor to BakerRepository violates at least two of the five SOLID principles:
Open/Closed Principle is violated, because this will cause changes to a different part of the system (the replacement or decoration of the repository for instance) to cause sweeping changes throughout the system, since you might be using the BakerRepository in many places.
Interface Segregation Principle is likely violated, because it is unlikely that your BakersController uses all of BakerRepository's methods.
Dependency Inversion Principle is violated, because your BakersController depends directly on a concrete type, instead of an abstraction. This makes it harder to change and evolve implementations independently.
None of these problems can be solved by changing the IRepository<Baker, int> parameter to BakerRepository. Instead, you should break out this special Bake method and place it behind its own abstraction, for instance:
public interface IBakeHandler
{
    BakeResults Bake([parameters]);
}
You can mark the BakerRepository with this new IBakeHandler interface as well:
class BakerRepository : IRepository<Baker, int>, IBakeHandler
{
}
This allows you to let the BakersController depend on IBakeHandler instead:
private IBakeHandler _bakeHandler;

public BakersController(IBakeHandler bakeHandler)
{
    _bakeHandler = bakeHandler;
}
This prevents violation of the SOLID principles, because:
The replacement of the implementation with a proxy, decorator or adapter will not ripple through the system; the BakersController is unaffected by such a change.
The IRepository<T> and especially the IBakeHandler stay narrow, making it much easier to create decorators to apply cross-cutting concerns or to create mock/stub/fake implementations for testing.
Repository and IBakeHandler implementations can be placed in assemblies that are unreferenced by the assembly that holds the controller.
Do note though that every time you break open such repository implementation to add new features you are effectively violating the Open/Closed principle and probably the Single Responsibility Principle as well.
In case you have many of those 'extra' repository features, you will start to see many one-method interfaces like IBakeHandler. Once you see this happening, extract new generic abstractions out of these interfaces. You can apply well-known patterns such as described here and here.
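One such generic abstraction, sketched below, replaces a pile of one-off interfaces like IBakeHandler with a single parameterized handler interface; all names here are illustrative, not from the question:

```csharp
// A generic handler abstraction that can subsume many one-method
// interfaces such as IBakeHandler; all names are illustrative.
public interface IHandler<TCommand, TResult>
{
    TResult Handle(TCommand command);
}

public class BakeCommand { /* bake parameters would go here */ }
public class BakeResult { }

// The 'extra' repository feature becomes just another handler.
public class BakeHandler : IHandler<BakeCommand, BakeResult>
{
    public BakeResult Handle(BakeCommand command)
    {
        // ...do the baking...
        return new BakeResult();
    }
}
```

The controller then depends on IHandler<BakeCommand, BakeResult>, and cross-cutting concerns (logging, validation, transactions) can be written once as decorators around IHandler<,> rather than per interface.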
The right answer is to actually pass the BakerRepository to your BakerController, because that's what it depends on.
Dependency injection is, as you say, a very useful pattern. However, it is just a tool to help build your objects when you've properly extracted the dependencies of your classes. In a non framework setting you'd be responsible for building these objects, and you'd quickly tire of passing in loads of parameters. DI helps you with that. But it is optional. So, if you did it by hand, you'd always construct the BakerController with a BakerRepository passed in. Therefore the same logic would apply when you're using DI.
In direct answer to your question: you could downcast your dependency, but that would have no bearing on what DI does for you.
Breaking out your dependencies CAN also be useful for TDD; isolating your external dependencies allows for unit testing of a class without exercising relatively expensive I/O calls or disruptive calls.
Also, breaking out dependencies allows you to focus the responsibilities of an object, having each object do one thing well.
In practice I have rarely seen or used DI (implemented off the back of an IoC container) to change concrete implementations, having only done it recently for feature toggling.
Another point is that objects which do not perform external I/O could feasibly be newed up and still tested, either sociably or in a solitary manner.
You shouldn't necessarily DI everything...

IoC containers: slightly varying the structure of a created instance

I am studying IoC, DDD and AOP concepts. I've read a number of articles and docs, and the Ninject manual (I'm restricted to .NET 3.5), tried some stuff, and so on.
It's hard to shove everything into my head at once, but the motivation, concepts and technical matters are somewhat clear. Still, I've always felt that I was missing something.
Firstly, as I understand it, an IoC container's purpose is the initial object structure set-up?
Like, set up the container in the composition root and create the "main" object, which is wired all the way down by the IoC container.
Then, as I understand it, all later objects are instantiated with factories? Although I can't help perceiving a factory as a case of service locator (which ended up being considered an antipattern and which, by the way, is used as the core mechanic of IoC containers).
So the question is:
What if I want to create an instance with a slightly different structure? E.g. I have:
interface IFoo { }
interface IBar { }

class SomeClass
{
    SomeClass(IFoo obj) { ... }
}

class Foo : IFoo
{
    Foo(IBar obj) { ... }
}

class Bar : IBar
{
}

class FooBar : IBar // also implements IBar interface
{
}
So the initial binding configuration produces the SomeClass.Foo.Bar structure. Assume I also need SomeClass.Foo.FooBar. What do I do? The options I can think of:
reconfigure bindings 'in place': just no.
have a constructor parameter for the top class that holds the configuration for the whole structure. That's pretty awful: aside from the fact that all subsequent constructors (and, in the end, all other project class constructors, I'm sure) will have to take one more parameter, it is not clear how it would function without some dirty tricks.
substitute what is needed after the object is created. This either breaks the Law of Demeter (about which I'm not too concerned, but in this case it's too rude) and a couple of other principles, or, in the general case, isn't possible at all.
use a factory that is configured somehow. That just defers/transfers the need itself to a later point or another place in the code.
use some kind of contextual/conventional binding. One solution I see (didn't test it yet) is to go all the way to the top of the "activation root" and check what is creating the hierarchy. Maybe we'll have to make a decorator for the top-level class, for the container to check its type and behave accordingly. Actually, the container may be configured so that it decides which concrete instance to inject by "parsing" the top-level interface's name, something like
ICfg_ConcreteType1_ConcreteType2_...
The problems here (besides that it looks like a hack):
a) we must introduce some mnemonic system, which is not obvious/user friendly.
b) we must have rules/decorators for every factory with this "feature" (though it looks like we can somewhat simplify the process, at least with rules).
c) it reminds me of reflection usage with convention over configuration, which I'm averse to and treat as a hack.
Or we may use attributes to set this up. Or maybe I just don't know something.
Firstly, as I understand it, an IoC container's purpose is the initial object structure set-up?
Forget about IoC containers for a moment. Dependency Injection is not about using tools. It's first and foremost about applying principles and patterns. The driving force behind Dependency Injection is the SOLID principles. I would even go as far as to start your application without using an IoC container at all, and only start using one when there is a really compelling reason to do so. This means that you simply build up the object graphs by hand. The right place to do this is in your Composition Root. This should be the single place where you compose your object graphs.
And do note that this advice comes from someone who is building and maintaining an IoC container himself.
Then, as I understand it, all later objects are instantiated with factories?
When practicing Dependency Injection, you will see that the need to use factories actually minimizes. They can still be useful, but I only use them sparingly nowadays.
Reason for this is that a factory usually just adds an extra (useless) layer of abstraction.
When starting with making code more loosely coupled, developers are tempted to use a factory as follows:
public class SomeClass
{
    public void HandleSomething() {
        IFoo foo = FooFactory.Create();
        foo.DoSomething();
    }
}
Although this allows a Foo implementation to be decoupled from SomeClass, SomeClass still takes a strong dependency on FooFactory. This still makes SomeClass hard to test, and lowers reusability.
After experiencing such a problem, developers often start to abstract away the FooFactory class as follows:
public class SomeClass
{
    private readonly IFooFactory fooFactory;

    public SomeClass(IFooFactory fooFactory) {
        this.fooFactory = fooFactory;
    }

    public void HandleSomething() {
        IFoo foo = this.fooFactory.Create();
        foo.DoSomething();
    }
}
Here an IFooFactory abstraction is used, which is injected using constructor injection. This allows SomeClass to be completely loosely coupled.
SomeClass, however, now has two external dependencies: it knows about both IFooFactory and IFoo. This doubles the complexity of SomeClass, while there is no compelling reason to do so. We will immediately notice this increase in complexity when writing unit tests, because we suddenly have to mock two different abstractions and test them both.
Since we are practicing constructor injection here, we can simplify SomeClass (without any downsides) to the following:
public class SomeClass
{
    private readonly IFoo foo;

    public SomeClass(IFoo foo) {
        this.foo = foo;
    }

    public void HandleSomething() {
        this.foo.DoSomething();
    }
}
Long story short, although the Factory design pattern is still valid and useful, you will hardly ever need it for retrieving injectables.
Although I can't help perceiving a factory as a case of service locator
No. A factory is not a Service Locator. The difference between a factory and a locator is that with a factory you can build up only one particular type of object, while a locator is untyped: you can build up anything. If you use an IoC container, however, you will often see that the factory implementation forwards the request to the container. This should not be a problem, because your factory implementation should be part of your Composition Root. The Composition Root always depends on your container, and this is not a form of Service Location, as Mark Seemann explains here.
Or we may use attributes to set this up. Or maybe I just don't know something.
Refrain from using attributes for building up object graphs. Attributes pollute your code base and cause a hard dependency on an underlying technology. You absolutely want your application to stay oblivious to any used composition tool. As I started with, you might not even use any tool at all.
For instance, your object graph can be composed quite easily as follows:
new SomeClass(
    new Foo(
        new Bar()));
In your example, you seem to have two IBar implementations. From the context it is completely unclear what the function of this abstraction and these implementations is. I assume that you want to be able to switch implementations on some runtime condition. This can typically be achieved by using a proxy implementation. In that case your object graph would look as follows:
new SomeClass(
    new Foo(
        new BarProxy(
            new Bar(),
            new FooBar())));
Here BarProxy looks as follows:
public class BarProxy : IBar
{
    private readonly IBar left;
    private readonly IBar right;

    public BarProxy(IBar left, IBar right) {
        this.left = left;
        this.right = right;
    }

    public void BarMethod(BarOperation op) {
        this.GetBar(op).BarMethod(op);
    }

    private IBar GetBar(BarOperation op) {
        return op.SomeValue ? this.left : this.right;
    }
}
It's hard to say when you should start using a DI container. Some people like to stay away from DI containers almost always. I found that for the type of applications I build (that are based on these and these patterns), a DI container becomes really valuable, because it saves you from having to constantly update your Composition Root. In other words:
Dependency Injection and the SOLID principles help making your application maintainable. A DI library will help in making your composition root maintainable, but only after you made your application maintainable using SOLID and DI.
You would generally use some sort of tag system.
http://www.ghij.org/blog/post/2014/05/19/how-to-tag-classes-to-determine-which-to-reflect-with-mef.aspx

Strategy pattern and "action" classes explosion

Is it bad policy to have lots of "work" classes (such as Strategy classes) that only do one thing?
Let's assume I want to make a Monster class. Instead of just defining everything I want about the monster in one class, I will try to identify what are its main features, so I can define them in interfaces. That will allow to:
Seal the class if I want. Later, other users can just create a new class and still have polymorphism by means of the interfaces I've defined. I don't have to worry about how people (or I myself) might want to change or add features to the base class in the future. All classes inherit from Object and implement polymorphism through interfaces, not through parent classes.
Reuse the strategies I'm using with this monster for other members of my game world.
Con: This model is rigid. Sometimes we would like to define something that is not easily achieved by just trying to put together this "building blocks".
public class AlienMonster : IWalk, IRun, ISwim, IGrowl
{
    IWalkStrategy _walkStrategy;
    IRunStrategy _runStrategy;
    ISwimStrategy _swimStrategy;
    IGrowlStrategy _growlStrategy;

    public AlienMonster()
    {
        _walkStrategy = new FourFootWalkStrategy();
        // ...etc
    }

    public void Walk() { _walkStrategy.Walk(); }
    // ...etc
}
My idea would next be to make a series of different Strategies that could be used by different monsters. Some of them could also be used for totally different purposes (i.e., I could have a tank that also "swims"). The only problem I see with this approach is that it could lead to an explosion of pure "method" classes, i.e., Strategy classes whose only purpose is to perform this or that single action. On the other hand, this kind of "modularity" would allow for high reuse of strategies, sometimes even in totally different contexts.
What is your opinion on this matter? Is this a valid reasoning? Is this over-engineering?
Also, assuming we'd make the proper adjustments to the example I gave above, would it be better to define IWalk as:
interface IWalk {
    void Walk();
}

or

interface IWalk {
    IWalkStrategy WalkStrategy { get; set; } // or something that resembles this
}
given that this way I wouldn't need to define the methods on Monster itself; I'd just have public getters for IWalkStrategy (which seems to go against the idea that you should encapsulate everything as much as you can!)
Why?
Thanks
Walk, Run, and Swim seem to be implementations rather than interfaces. You could have a ILocomotion interface and allow your class to accept a list of ILocomotion implementations.
Growl could be an implementation of something like an IAbility interface. And a particular monster could have a collection of IAbility implementations.
Then have a couple of interfaces that supply the logic for choosing which ability or locomotion to use: IMove and IAct, for example.
public class AlienMonster : IMove, IAct
{
    private List<IAbility> _abilities;
    private List<ILocomotion> _locomotions;

    public AlienMonster()
    {
        _abilities = new List<IAbility> { new Growl() };
        _locomotions = new List<ILocomotion> { new Walk(), new Run(), new Swim() };
    }

    public void Move()
    {
        // implementation for the IMove interface
    }

    public void Act()
    {
        // implementation for the IAct interface
    }
}
By composing your class this way you will avoid some of the rigidity.
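The snippet above leans on a few types it doesn't show. A minimal, self-contained sketch of what they might look like follows; the member names (Use, Go, the count properties) are my guesses, not part of the original answer:

```csharp
using System.Collections.Generic;

// Minimal sketch of the types the snippet above assumes;
// member names here are illustrative guesses.
public interface IAbility { void Use(); }
public interface ILocomotion { void Go(); }

public class Growl : IAbility { public void Use() { /* growl */ } }
public class Walk : ILocomotion { public void Go() { /* walk */ } }
public class Run : ILocomotion { public void Go() { /* run */ } }
public class Swim : ILocomotion { public void Go() { /* swim */ } }

public class AlienMonster
{
    private readonly List<IAbility> _abilities =
        new List<IAbility> { new Growl() };
    private readonly List<ILocomotion> _locomotions =
        new List<ILocomotion> { new Walk(), new Run(), new Swim() };

    // Move/Act would pick from these lists internally, so callers never
    // need to know which concrete capabilities the monster has.
    public int AbilityCount => _abilities.Count;
    public int LocomotionCount => _locomotions.Count;
}
```

Because the capabilities live in private collections, the choice of which one to exercise stays inside the class rather than being pushed onto every caller.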
EDIT: added the stuff about IMove and IAct
EDIT: after some comments
By adding IWalk, IRun, and ISwim to a monster you are saying that anything that can see the object should be able to call any of the methods in any of those interfaces and have it be meaningful. Further, in order for something to decide which of the three interfaces it should use, you have to pass the entire object around. One huge advantage of using an interface is that you can reference an object by that interface.
void SomeFunction(IWalk alienMonster) {...}
The above function will take anything that implements IWalk, but if there are variations of SomeFunction for IRun, ISwim, and so on, you have to write a whole new function for each of those, or pass the AlienMonster object in whole. If you pass the object in, then that function can call any and all interfaces on it. It also means that the function has to query the AlienMonster object to see what its capabilities are and then decide which to use. All of this ends up externalizing a lot of functionality that should be kept internal to the class. Because you are externalizing all of that, and there is no commonality among IWalk, IRun, and ISwim, some function(s) could innocently call all three interfaces and your monster could be running, walking and swimming at the same time. Further, since you will want to be able to call IWalk, IRun, and ISwim on some classes, all classes will basically have to implement all three interfaces, and you'll end up making a strategy like CannotSwim to satisfy the interface requirement for ISwim when a monster can't swim. Otherwise you could end up trying to call an interface that isn't implemented on a monster. In the end you are actually making the code worse with the extra interfaces, IMO.
In languages which support multiple inheritance, you could indeed create your Monster by inheriting from classes which represent the things it can do. In these classes, you would write the code for it, so that no code has to be copied between implementing classes.
In C#, being a single-inheritance language, I see no other way than creating interfaces for them. If there is a lot of code shared between the classes, then your IWalkStrategy approach would work nicely to reduce redundant code. In the case of your example, you might also want to combine several related actions (such as walking, swimming and running) into a single class.
Reuse and modularity are Good Things, and having many interfaces with just a few methods is in my opinion not a real problem in this case. Especially because the actions you define with the interfaces are actually different things which may be used differently. For example, a method might want an object which is able to jump, so it must implement that interface. This way, you force this restriction by type at compile time instead of some other way throwing exceptions at run time when it doesn't meet the method's expectations.
So in short: I would do it the same way as you proposed, using an additional I...Strategy approach when it reduces code copied between classes.
From a maintenance standpoint, over-abstraction can be just as bad as rigid, monolithic code. Most abstractions add complication and so you have to decide if the added complexity buys you something valuable. Having a lot of very small work classes may be a sign of this kind of over-abstraction.
With modern refactoring tools, it's usually not too difficult to create additional abstraction where and when you need it, rather than fully architecting a grand design up-front. On projects where I've started very abstractly, I've found that, as I developed the concrete implementation, I would discover cases I hadn't considered and would often find myself trying to contort the implementation to match the pattern, rather than going back and reconsidering the abstraction. When I start more concretely, I identify (more of) the corner cases ahead of time and can better determine where it really makes sense to abstract.
"Find what varies and encapsulate it."
If how a monster walks varies, then encapsulate that variation behind an abstraction. If you need to change how a monster walks, you probably have a state pattern in your problem. If you need to make sure that the Walk and Growl strategies agree, then you probably have an abstract factory pattern.
In general: no, it is definitely not over-engineering to encapsulate various concepts into their own classes. There is also nothing wrong with making concrete classes sealed or final, either. It forces people to consciously break encapsulation before inheriting from something that probably should not be inherited.
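To make the abstract-factory remark concrete, here is a minimal sketch (the zombie family and its strategy names are hypothetical examples of mine):
```csharp
// An abstract factory guarantees that the Walk and Growl strategies
// for one monster family are always produced together, and so agree.
public interface IWalkStrategy { string Walk(); }
public interface IGrowlStrategy { string Growl(); }

public interface IMonsterBehaviorFactory
{
    IWalkStrategy CreateWalk();
    IGrowlStrategy CreateGrowl();
}

public class ShufflingWalk : IWalkStrategy
{
    public string Walk() => "shuffle";
}

public class Moan : IGrowlStrategy
{
    public string Growl() => "braaains";
}

// One concrete factory per family: you can never accidentally pair
// a zombie shuffle with, say, a dragon roar.
public class ZombieBehaviorFactory : IMonsterBehaviorFactory
{
    public IWalkStrategy CreateWalk() => new ShufflingWalk();
    public IGrowlStrategy CreateGrowl() => new Moan();
}
```
Client code asks the factory for both strategies and, by construction, gets a consistent pair.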

How does this "Programming to Interfaces" thing work?

I like the idea of "programming to interfaces" and avoiding the use of the "new" keyword.
However, what do I do when I have two classes that have the same interface but are fundamentally different to set up. Without going into detail about my specific code, I have an interface with a method, "DoStuff". Two classes implement this interface. One is very simple and requires no initialisation to speak of. The other has five different variables that need to be set up. When combined, they allow for literally millions of ways for the class to work when DoStuff is called.
So when do I "new" these classes? I thought about using factories, but I don't think they are suitable in this case because of the vast difference in setup. (BTW: there are actually about ten different classes using the interface, each allowing the formation of part of a complex pipeline and each with different configuration requirements.)
I think you may be misunderstanding the concept of programming to interfaces. In object-oriented languages you always have to use the new keyword somewhere to create instances of objects; programming to interfaces doesn't remove that requirement.
Programming to an interface simply means that all your concrete classes have their behavior defined in an interface instead of in the concrete class itself. So when you define the type of a variable, you define it to be the interface instead of a concrete type.
In your case, just implement DoStuff in each concrete class as that class needs it implemented (whether simply, or with 10 other initialized objects and setup). For example, suppose you have an interface IInterface and a class SomeClass that implements IInterface. You might declare an instance of SomeClass like this:
IInterface myInstance = new SomeClass();
This allows you to pass this instance around to other functions without having to have those functions worry about the implementation details of that instance's class.
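A small sketch of that point, using the IInterface/SomeClass names from above (ComplexClass and Pipeline are hypothetical additions of mine to show a second, differently constructed implementation):
```csharp
public interface IInterface { string DoStuff(); }

// The simple implementation: no setup at all.
public class SomeClass : IInterface
{
    public string DoStuff() => "simple";
}

// A differently constructed implementation of the same interface.
public class ComplexClass : IInterface
{
    private readonly int _mode;
    public ComplexClass(int mode) => _mode = mode;
    public string DoStuff() => $"complex:{_mode}";
}

public static class Pipeline
{
    // The caller sees only IInterface; how the instance was
    // constructed is irrelevant here.
    public static string Run(IInterface step) => step.DoStuff();
}
```
Pipeline.Run accepts either class, no matter how elaborate its construction was.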
Well, you really have three options: use new, use a factory, or use a DI container. With a DI container, your five variables would most likely need to live in a configuration file of some sort.
But to be completely honest, it sounds like you're making your life harder than it needs to be by backing yourself into a corner. Instead of coding to some ideal, code in the manner that best solves the problem at hand. That's not to say you should do a hack job of it, but refusing to use new at all really does make your life harder than it needs to be...
Regardless of what you use, at some point you're going to have to construct instances of your classes in order to use them, there's no way around that.
How to go about doing that depends on what you want to accomplish, and the semantics of those classes.
Take the class you mention with those fields.
Can those fields be read from somewhere? A configuration file, as an example? If so, perhaps all you need is just a default constructor that initializes those fields from such a configuration file.
However, if the content of those fields really needs to be passed in from the outside world, there's no way around that.
Perhaps you should look at an IoC container and Dependency Injection?
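The configuration-file idea above could look something like this sketch (ConfiguredDoer and the in-memory LoadFromConfigFile stand-in are assumptions; a real version would read an actual config source):
```csharp
using System.Collections.Generic;

public interface IDoStuff { string DoStuff(); }

public class ConfiguredDoer : IDoStuff
{
    private readonly IReadOnlyDictionary<string, string> _settings;

    // Default constructor: pulls its settings from configuration,
    // so callers can still just write new ConfiguredDoer().
    public ConfiguredDoer() : this(LoadFromConfigFile()) { }

    // Settings can also be injected directly, e.g. in tests.
    public ConfiguredDoer(IReadOnlyDictionary<string, string> settings)
        => _settings = settings;

    public string DoStuff() => $"mode={_settings["mode"]}";

    // Stand-in for reading a configuration file.
    private static IReadOnlyDictionary<string, string> LoadFromConfigFile()
        => new Dictionary<string, string> { ["mode"] = "fast" };
}
```
The parameterless constructor keeps construction trivial for callers, while the second constructor keeps the class testable.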
If you are passing that many configuration parameters into your class it may have too many responsibilities. You should look into breaking it up into smaller classes that only have a single responsibility.
Avoiding the new keyword can be valuable because new creates a dependency on the implementing class. A better solution would be to use Dependency Injection.
For example:
public interface IDoStuff
{
void DoStuff();
}
public class DoStuffService
{
private IDoStuff doer;
public DoStuffService()
{
// Class is now dependent on DoLotsOfStuff
doer = new DoLotsOfStuff(1,true, "config string");
}
}
public class DoStuffBetterService
{
private IDoStuff doer;
// Inject the dependency - no longer dependent on DoLotsOfStuff
public DoStuffBetterService(IDoStuff doer)
{
this.doer = doer;
}
}
Obviously you still have to create the IDoStuff object being passed in somewhere.
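One container-free way to do that is a hand-rolled composition root: a single place where all the new calls live. This is a sketch built on the DoLotsOfStuff example above (the DoStuff/Run return values are illustrative so the wiring is observable):
```csharp
public interface IDoStuff { string DoStuff(); }

public class DoLotsOfStuff : IDoStuff
{
    private readonly int _count;
    private readonly bool _flag;
    private readonly string _config;

    public DoLotsOfStuff(int count, bool flag, string config)
    {
        _count = count;
        _flag = flag;
        _config = config;
    }

    public string DoStuff() => $"{_config}:{_count}:{_flag}";
}

public class DoStuffBetterService
{
    private readonly IDoStuff _doer;
    public DoStuffBetterService(IDoStuff doer) => _doer = doer;
    public string Run() => _doer.DoStuff();
}

// All construction knowledge is concentrated here; the services
// themselves never mention a concrete class.
public static class CompositionRoot
{
    public static DoStuffBetterService Build() =>
        new DoStuffBetterService(new DoLotsOfStuff(1, true, "config string"));
}
```
An IoC container essentially automates what CompositionRoot.Build does by hand.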
An Inversion of Control (IoC) container is a good tool to help with implementing this.
Here is a good tutorial for Castle Windsor Container if you are interested in learning more. (There are many other IoC containers, I just happen to use this one.)
The example in your question was very abstract, so I hope this answer is helpful.
If I understand you correctly, the problem is the different initialization needed by two classes that share the same interface: one does not need anything, while the other needs some parameters and some complex initialization.
You could have a constructor that takes an InitializationParameter. Both classes would accept it: the simple one ignores it, and the other pulls the parameters it needs from it.
If you are concerned about initialization, you can use a factory: ask it for the interface, providing the init parameter, and the factory will create, initialize and return the object according to the values you provided.
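A sketch of that shape, assuming a lot: InitializationParameter, the two doer classes, and the string-keyed factory are all hypothetical names of mine:
```csharp
// A single parameter object both constructors accept.
public class InitializationParameter
{
    public string Setting { get; set; } = "";
}

public interface IDoStuff { string DoStuff(); }

// The simple class takes the parameter but ignores it.
public class SimpleDoer : IDoStuff
{
    public SimpleDoer(InitializationParameter _) { }
    public string DoStuff() => "simple";
}

// The complex class pulls what it needs from the parameter.
public class ComplexDoer : IDoStuff
{
    private readonly string _setting;
    public ComplexDoer(InitializationParameter p) => _setting = p.Setting;
    public string DoStuff() => $"complex:{_setting}";
}

// The factory hides which concrete class gets constructed.
public static class DoerFactory
{
    public static IDoStuff Create(string kind, InitializationParameter p) =>
        kind == "simple" ? new SimpleDoer(p) : (IDoStuff)new ComplexDoer(p);
}
```
Callers deal only with IDoStuff and the parameter object; the uneven setup stays inside the factory.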
If something is not clear - please ask.
