Using Ninject with the ServiceLocator pattern - good or bad [closed] - c#

I'm just getting used to Ninject. I understand the principles of Dependency Injection and I know how to use Ninject. But I'm a little confused right now: opinions diverge when it comes to the Service Locator pattern.
My application is built upon a strictly modular basis. I try to use constructor injection as much as I can and this works pretty well, though it's a little bit messy (in my opinion).
Now when an extension (external code) wants to benefit from this system, wouldn't it be required to have access to the kernel? I mean right now I have one static class which gives access to all subsystems of my application. An alternative would be having access to the kernel (Service Locator pattern) and grabbing the subsystem dependencies from this one.
Here I can easily avoid giving access to the kernel, or to be more explicit, avoid allowing dependencies on the kernel.
But if an extension now wants to use any components (interfaces) of my application, from any subsystem, it would be required to have access to the kernel in order to resolve them, because Ninject does not resolve anything automatically unless you call kernel.Get<T>(), right?
Phew, it's really difficult to explain this in an understandable way. I hope you get what I'm aiming for.
Why is it so "bad" to have a dependency on the kernel, or on a wrapper of it? I mean, you can't avoid all dependencies. For example, I still have the one on my "Core" class which gives access to all subsystems.
What if an extension wants to register its own module for further usage?
I can't find any good answer for why this should be a bad approach, but I read it quite often. Moreover, it is stated that Ninject does NOT use this approach, unlike Unity or similar frameworks.
Thanks :)

There are religious wars about this...
The first thing that people say when you mention a Service Locator is: "but what if I want to change my container?". This argument is almost always invalid given that a "proper" Service Locator could be abstract enough to allow you to switch the underlying container.
That said, use of a Service Locator has, in my experience, made the code difficult to work with. Your Service Locator must be passed around everywhere, and your code ends up tightly coupled to the very existence of a Service Locator.
When you use a Service Locator, you have two main options to maintain modules in a "decoupled" (loosely used here..) way.
Option 1:
Pass your locator into everything that requires it. Essentially this means your code becomes lots and lots of this sort of code:
var locator = _locator;
var customerService = locator.Get<ICustomerService>();
var orders = customerService.GetOrders(locator, customerId); // locator again
// .. further down..
var repo = locator.Get<ICustomerRepository>();
var orderRepo = locator.Get<IOrderRepository>();
// ...etc...
Option 2:
Smash all of your code into a single assembly and provide a public static Service Locator somewhere. This is even worse, and ends up being the same as above (just with direct calls to the Service Locator).
Ninject is lucky (by lucky I mean - has a great maintainer/extender in Remo) in that it has a heap of extensions that allow you to fully utilise Inversion of Control in almost all parts of your system, eliminating the sort of code I showed above.

This is a bit against SO's policy, but to extend on Simon's answer I'll direct you to Mark Seemann's excellent blog post: http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/
Some of the comments to the blog post are very interesting, too.
Now, to address your problem with Ninject and extensions - which I assume are unknown to you/the composition root at the time you write them - I would like to point out NinjectModules. Ninject already features such an extensibility mechanism, which is heavily used by all the ninject.extension.XYZ DLLs.
What you'll do is implement some FooExtensionModule : NinjectModule classes in your extensions. The modules contain the Bind calls. Then you tell Ninject to load all modules from a set of DLLs. That's it.
It's explained in far greater detail here: https://github.com/ninject/ninject/wiki/Modules-and-the-Kernel
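For illustration, here's a minimal sketch of such a module and how it gets loaded (FooExtensionModule, IFoo, and Foo are placeholder names):
using Ninject;
using Ninject.Modules;

public interface IFoo { }
public class Foo : IFoo { }

// Lives in the extension assembly; declares the extension's own bindings.
public class FooExtensionModule : NinjectModule
{
    public override void Load()
    {
        Bind<IFoo>().To<Foo>();
    }
}

public static class CompositionRoot
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        // Load every NinjectModule found in the extension DLLs; Load accepts
        // file patterns, so dropping a new DLL into the folder is enough.
        kernel.Load("Extensions/*.dll");
        return kernel;
    }
}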
Drawbacks:
Extensions depend on Ninject
you may need to recompile your extensions when you "update ninject"
as the software grows, it will make it more and more expensive to switch DI containers
When using Rebind, issues may arise which are difficult to track down (though this is the case whether you're using modules or not).
Especially when extension developers don't know about other extensions, they might create identical or conflicting bindings (such as .Bind<string>().ToConstant("Foo") and .Bind<string>().ToConstant("Bar")). Again, this is also the case when you're not using modules, but extensions add one more layer of complexity.
Advantage:
- simple and to the point; there's no extra layer of complication/abstraction of the kind you'd need in order to abstract the container away.
I've used the NinjectModule approach in a not-so-small application (15k unit/component tests) with a lot of success.
If all you need are simple bindings like .Bind<IFoo>().To<Foo>() without scopes and so on, you might also consider using a simpler system, like putting attributes on classes, scanning for these, and creating the bindings in the composition root. This approach is way less powerful, but because of that it is also much more "portable" (adaptable for use with another DI container).
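A hedged sketch of that attribute idea (BindToAttribute and the scanning helper are hypothetical, not part of Ninject):
using System;
using System.Reflection;
using Ninject;

// Hypothetical marker: "bind this class to the given service type".
[AttributeUsage(AttributeTargets.Class)]
public sealed class BindToAttribute : Attribute
{
    public Type ServiceType { get; }
    public BindToAttribute(Type serviceType) { ServiceType = serviceType; }
}

public interface IFoo { }

[BindTo(typeof(IFoo))]
public class Foo : IFoo { }

public static class AttributeBindings
{
    // Called once in the composition root for each assembly to scan.
    public static void Register(IKernel kernel, Assembly assembly)
    {
        foreach (var type in assembly.GetTypes())
        {
            foreach (BindToAttribute attr in type.GetCustomAttributes(typeof(BindToAttribute), false))
            {
                kernel.Bind(attr.ServiceType).To(type);
            }
        }
    }
}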
Dependency Injection and late instantiation
The idea of the composition root is that (whenever possible) you create all objects (the entire object graph) in one go. For example, in the Main method you might have kernel.Get<IMainViewModel>().Show().
However, sometimes this is not feasible or appropriate. In such cases you will need to use factories. There are actually a bunch of answers in this regard on Stack Overflow already.
There are three basic types:
To create an instance of Foo which requires instances of Dependency1 and Dependency2 (ctor-injected), create a class FooFactory which gets one instance of Dependency1 and Dependency2 ctor-injected itself. The FooFactory.Create method will then do new Foo(this.dependency1, this.dependency2).
use ninject.extensions.Factory:
use Func<Foo> as a factory: you can have a Func<Foo> injected and then call it to create an instance of Foo.
use interfaces and the .ToFactory() binding (I recommend this approach: cleaner code, better testability). For example: IFooFactory with a method Foo Create(). Bind it like: Bind<IFooFactory>().ToFactory();
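A minimal sketch of the interface-factory option, assuming the ninject.extensions.factory package is referenced (Foo and IFooFactory are placeholders):
using Ninject;
using Ninject.Extensions.Factory;

public class Foo { }

// No implementation needed; the extension generates one that resolves Foo
// through the kernel on every Create() call.
public interface IFooFactory
{
    Foo Create();
}

public static class FactoryComposition
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IFooFactory>().ToFactory();
        // With this extension loaded, injecting a Func<Foo> works as well.
        return kernel;
    }
}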
Extensions which replace Implementations
IMHO this is not one of the goals of dependency-injection containers. That doesn't mean it's impossible, but it just means you've got to figure it out yourself.
The simplest thing you could do with Ninject would be to use .Rebind<IFoo>().To<SomeExtensionsFoo>(). However, as stated before, that's a bit brittle. If the Bind and Rebind are executed in the wrong sequence, it fails. If there are multiple Rebinds, the last one wins - but is it the correct one?
So let's take it one step further. Imagine:
.Bind<IFoo>().To<SomeExtensionsFoo>().WhenExtensionEnabled<SomeExtension>();
You can devise your own custom WhenExtensionEnabled<TExtension>() extension method on top of Ninject's conditional binding syntax (the When(...) method).
You'll have to devise a way to detect whether an extension is enabled or not.
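A hedged sketch of how such an extension method might look; the ExtensionRegistry used to track enabled extensions is hypothetical and entirely up to you:
using System;
using System.Collections.Generic;
using Ninject.Syntax;

// Hypothetical registry of enabled extensions; replace with your own detection.
public static class ExtensionRegistry
{
    private static readonly HashSet<Type> Enabled = new HashSet<Type>();
    public static void Enable(Type extension) { Enabled.Add(extension); }
    public static bool IsEnabled(Type extension) { return Enabled.Contains(extension); }
}

public static class WhenExtensionEnabledSyntax
{
    // Wraps Ninject's When(...) so the binding only applies while the extension
    // is enabled. Note: C# cannot infer T here, so the call site needs both type
    // arguments, e.g. .WhenExtensionEnabled<SomeExtension, IFoo>().
    public static IBindingInNamedWithOrOnSyntax<T> WhenExtensionEnabled<TExtension, T>(
        this IBindingWhenSyntax<T> binding)
    {
        return binding.When(request => ExtensionRegistry.IsEnabled(typeof(TExtension)));
    }
}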


What makes a ServiceLocator be an anti-pattern? [duplicate]

Recently I've read Mark Seemann's article about the Service Locator anti-pattern.
The author points out two main reasons why Service Locator is an anti-pattern:
API usage issue (which I'm perfectly fine with)
When a class employs a Service Locator it is very hard to see its dependencies because, in most cases, the class has only one parameterless constructor.
In contrast with Service Locator, the DI approach explicitly exposes dependencies via constructor parameters, so dependencies are easily seen in IntelliSense.
Maintenance issue (which puzzles me)
Consider the following example.
We have a class MyType which employs the Service Locator approach:
public class MyType
{
    public void MyMethod()
    {
        var dep1 = Locator.Resolve<IDep1>();
        dep1.DoSomething();
    }
}
Now we want to add another dependency to class 'MyType'
public class MyType
{
    public void MyMethod()
    {
        var dep1 = Locator.Resolve<IDep1>();
        dep1.DoSomething();
        // new dependency
        var dep2 = Locator.Resolve<IDep2>();
        dep2.DoSomething();
    }
}
And here is where my misunderstanding starts. The author says:
It becomes a lot harder to tell whether you are introducing a breaking change or not. You need to understand the entire application in which the Service Locator is being used, and the compiler is not going to help you.
But wait a second: if we were using the DI approach, we would introduce a dependency as another parameter in the constructor (in the case of constructor injection), and the problem would still be there. If we can forget to set up the ServiceLocator, then we can also forget to add a new mapping in our IoC container, and the DI approach would have the same run-time problem.
Also, the author mentioned unit-test difficulties. But won't we have issues with the DI approach too? Won't we need to update all the tests that instantiate that class? We would update them to pass a new mocked dependency just to make the tests compile. I don't see any benefit from that update and the time spent on it.
I'm not trying to defend the Service Locator approach. But this misunderstanding makes me think that I'm missing something very important. Could somebody dispel my doubts?
UPDATE (SUMMARY):
The answer to my question "Is Service Locator an anti-pattern?" really depends upon the circumstances. And I definitely wouldn't suggest crossing it off your tool list. It might become very handy when you start dealing with legacy code. If you're lucky enough to be at the very beginning of your project, then the DI approach might be a better choice, as it has some advantages over Service Locator.
And here are the main differences which convinced me not to use Service Locator in my new projects:
Most obvious and important: Service Locator hides class dependencies
If you are utilizing some IoC container, it will likely scan all constructors at startup to validate all dependencies and give you immediate feedback on missing mappings (or wrong configuration); this is not possible if you're using your IoC container as a Service Locator
For details, read the excellent answers given below.
If you define patterns as anti-patterns just because there are some situations where they do not fit, then YES, it's an anti-pattern. But by that reasoning, all patterns would be anti-patterns.
Instead we have to look if there are valid usages of the patterns, and for Service Locator there are several use cases. But let's start by looking at the examples that you have given.
public class MyType
{
    public void MyMethod()
    {
        var dep1 = Locator.Resolve<IDep1>();
        dep1.DoSomething();
        // new dependency
        var dep2 = Locator.Resolve<IDep2>();
        dep2.DoSomething();
    }
}
The maintenance nightmare with that class is that the dependencies are hidden. If you create and use that class:
var myType = new MyType();
myType.MyMethod();
You cannot tell that it has dependencies, because they are hidden through service location. Now, if we instead use dependency injection:
public class MyType
{
    private readonly IDep1 dep1;
    private readonly IDep2 dep2;

    public MyType(IDep1 dep1, IDep2 dep2)
    {
        this.dep1 = dep1;
        this.dep2 = dep2;
    }

    public void MyMethod()
    {
        dep1.DoSomething();
        // new dependency
        dep2.DoSomething();
    }
}
You can directly spot the dependencies and cannot use the classes before satisfying them.
In a typical line-of-business application, you should avoid the use of service location for that very reason. It should be the pattern to use only when there are no other options.
Is the pattern an anti-pattern?
No.
For instance, inversion of control containers would not work without service location. It's how they resolve the services internally.
But a much better example is ASP.NET MVC and WebApi. What do you think makes the dependency injection possible in the controllers? That's right -- service location.
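You can see that seam in MVC's IDependencyResolver. Here's a sketch of a typical Ninject-backed implementation - a small service locator confined to the framework plumbing at the composition root:
using System;
using System.Collections.Generic;
using System.Web.Mvc;
using Ninject;

// The framework calls GetService for each controller type it needs --
// service location, but hidden inside the framework itself.
public class NinjectDependencyResolver : IDependencyResolver
{
    private readonly IKernel kernel;

    public NinjectDependencyResolver(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public object GetService(Type serviceType)
    {
        return kernel.TryGet(serviceType);
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return kernel.GetAll(serviceType);
    }
}

// Registered once at application start:
// DependencyResolver.SetResolver(new NinjectDependencyResolver(kernel));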
Your questions
But wait a second, if we were using DI approach, we would introduce a dependency with another parameter in constructor (in case of constructor injection). And the problem will be still there.
There are two more serious problems:
With service location you are also adding another dependency: The service locator.
How do you tell which lifetime the dependencies should have, and how/when they should get cleaned up?
With constructor injection using a container you get that for free.
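For example, in Ninject the lifetime is declared once, at the binding (the service names here are placeholders):
// One line per binding decides how long instances live and who disposes them.
kernel.Bind<ICustomerRepository>().To<CustomerRepository>().InSingletonScope(); // one shared instance
kernel.Bind<IOrderService>().To<OrderService>().InTransientScope(); // new instance per resolution
// The web extensions add InRequestScope() for per-web-request lifetimes.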
If we may forget to setup ServiceLocator, then we may forget to add a new mapping in our IoC container and DI approach would have the same run-time problem.
That's true. But with constructor injection you do not have to scan the entire class to figure out which dependencies are missing.
And some better containers also validate all dependencies at startup (by scanning all constructors). So with those containers you get the runtime error directly at startup, not at some later point in time.
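Ninject doesn't validate everything up front by itself, but you can approximate the idea with a smoke test in the composition root; a hedged sketch (the root types passed in are placeholders):
using System;
using Ninject;

public static class KernelVerifier
{
    // Eagerly resolve each composition root so a missing binding fails at
    // startup (Ninject throws an ActivationException) instead of later.
    public static void VerifyBindings(IKernel kernel, params Type[] rootTypes)
    {
        foreach (var root in rootTypes)
        {
            kernel.Get(root);
        }
    }
}

// Usage at startup, e.g.:
// KernelVerifier.VerifyBindings(kernel, typeof(IMainViewModel), typeof(IOrderProcessor));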
Also, author mentioned about unit test difficulties. But, won't we have issues with DI approach?
No, since you do not have a dependency on a static service locator. Have you tried to get parallel tests working with static dependencies? It's not fun.
I would also like to point out that IF you are refactoring legacy code, the Service Locator pattern is not only not an anti-pattern, but also a practical necessity. No one is ever going to wave a magic wand over millions of lines of code and suddenly make all that code DI-ready. So if you want to start introducing DI to an existing code base, it is often the case that you will change things to become DI services slowly, and the code that references these services will often NOT be DI services itself. Hence THOSE services will need to use the Service Locator to get instances of the services that HAVE been converted to use DI.
So when refactoring large legacy applications to start to use DI concepts I would say that not only is Service Locator NOT an anti-pattern, but that it is the only way to gradually apply DI concepts to the code base.
From the testing point of view, Service Locator is bad. See Misko Hevery's Google Tech Talk for a nice explanation with code examples (http://youtu.be/RlfLCWKxHJ0, starting at minute 8:45). I liked his analogy: if you need $25, ask directly for the money rather than handing over your wallet for the money to be taken from it. He also compares Service Locator to a haystack that contains the needle you need and knows how to retrieve it. Classes using a Service Locator are hard to reuse because of it.
Maintenance issue (which puzzles me)
There are two different reasons why using a service locator is bad in this regard.
In your example, you are hard-coding a static reference to the service locator into your class. This tightly couples your class directly to the service locator, which in turn means it won't function without the service locator. Furthermore, your unit tests (and anybody else who uses the class) are also implicitly dependent on the service locator. One thing that seems to have gone unnoticed here is that when using constructor injection you don't need a DI container when unit testing, which simplifies your unit tests (and developers' ability to understand them) considerably. That is the realized unit-testing benefit you get from using constructor injection.
As for why constructor Intellisense is important, people here seem to have missed the point entirely. A class is written once, but it may be used in several applications (that is, several DI configurations). Over time, it pays dividends if you can look at the constructor definition to understand a class's dependencies, rather than looking at the (hopefully up-to-date) documentation or, failing that, going back to the original source code (which might not be handy) to determine what a class's dependencies are. The class with the service locator is generally easier to write, but you more than pay the cost of this convenience in ongoing maintenance of the project.
Plain and simple: A class with a service locator in it is more difficult to reuse than one that accepts its dependencies through its constructor.
Consider the case where you need to use a service from LibraryA whose author decided to use ServiceLocatorA, and a service from LibraryB whose author decided to use ServiceLocatorB. We have no choice other than using two different service locators in our project. How many dependencies need to be configured is a guessing game if we don't have good documentation, source code, or the author on speed dial. Failing these options, we might need a decompiler just to figure out what the dependencies are. We may need to configure two entirely different service locator APIs, and depending on the design, it may not be possible to simply wrap your existing DI container. It may not be possible at all to share one instance of a dependency between the two libraries. The complexity of the project could be compounded even further if the service locators don't happen to reside in the same libraries as the services we need - we are implicitly dragging additional library references into our project.
Now consider the same two services made with constructor injection. Add a reference to LibraryA. Add a reference to LibraryB. Provide the dependencies in your DI configuration (by analyzing what is needed via Intellisense). Done.
Mark Seemann has a StackOverflow answer that clearly illustrates this benefit in graphical form, which not only applies when using a service locator from another library, but also when using foreign defaults in services.
My knowledge is not good enough to judge this, but in general, I think that if something has a use in a particular situation, it does not necessarily mean it cannot be an anti-pattern. Especially when you are dealing with third-party libraries, you don't have full control over all aspects, and you may end up with a solution that is not the very best.
Here is a paragraph from Adaptive Code Via C#:
"Unfortunately, the service locator is sometimes an unavoidable anti-pattern. In some application types— particularly Windows Workflow Foundation— the infrastructure does not lend itself to constructor injection. In these cases, the only alternative is to use a service locator. This is better than not injecting dependencies at all. For all my vitriol against the (anti-) pattern, it is infinitely better than manually constructing dependencies. After all, it still enables those all-important extension points provided by interfaces that allow decorators, adapters, and similar benefits."
-- Hall, Gary McLean. Adaptive Code via C#: Agile coding with design patterns and SOLID principles (Developer Reference) (p. 309). Pearson Education.
I can suggest considering a generic approach to avoid the demerits of the Service Locator pattern.
It allows explicitly declaring class dependencies and substituting mocks, and it doesn't depend on a particular DI container.
Possible drawbacks of this approach are:
It makes your control classes generic.
It is not easy to override some particular interface.
First, declare the interface:
public interface IResolver<T>
{
    T Resolve();
}
Then create a 'flattened' class that implements resolution of the most frequently used interfaces from the DI container, and register it.
This short example uses the Service Locator, but only before the composition root. An alternative is to inject each interface via the constructor.
public class FlattenedServices :
    IResolver<I1>,
    IResolver<I2>,
    IResolver<I3>
{
    private readonly DIContainer diContainer;

    public FlattenedServices(DIContainer diContainer)
    {
        this.diContainer = diContainer;
    }

    I1 IResolver<I1>.Resolve() => diContainer.Resolve<I1>();
    I2 IResolver<I2>.Resolve() => diContainer.Resolve<I2>();
    I3 IResolver<I3>.Resolve() => diContainer.Resolve<I3>();
}
Constructor injection on some MyType class should then look like this:
public class MyType<T> : IResolver<T>
    where T : class, IResolver<I1>, IResolver<I3>
{
    private readonly T servicesContext;

    public MyType(T servicesContext)
    {
        this.servicesContext = servicesContext
            ?? throw new ArgumentNullException(nameof(servicesContext));
        _ = (servicesContext as IResolver<I1>).Resolve() ?? throw new ArgumentNullException(nameof(I1));
        _ = (servicesContext as IResolver<I3>).Resolve() ?? throw new ArgumentNullException(nameof(I3));
    }

    public void MyMethod()
    {
        var dep1 = ((IResolver<I1>)servicesContext).Resolve();
        dep1.DoSomething();
        var dep3 = ((IResolver<I3>)servicesContext).Resolve();
        dep3.DoSomething();
    }

    T IResolver<T>.Resolve() => servicesContext;
}
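Usage would then look roughly like this (DIContainer stands for whatever container you actually use):
// Compose once, near the composition root:
var services = new FlattenedServices(diContainer);
var myType = new MyType<FlattenedServices>(services);
myType.MyMethod();

// In a unit test, substitute a hand-written fake that implements
// IResolver<I1> and IResolver<I3> -- no container required.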
P.S. If you don't need to pass servicesContext further along inside MyType, you can declare it as object servicesContext; and make only the constructor generic, not the class.
P.P.S. The FlattenedServices class could be considered the main DI container, and an off-the-shelf container a supplementary one.
The author reasons that "the compiler won't help you" - and that is true.
When you design a class, you will want to choose its interface carefully - among other goals, to make it as independent as it makes sense to be.
By having the client accept the reference to a service (to a dependency) through an explicit interface, you implicitly get checking, so the compiler "helps".
You're also removing the need for the client to know something about the "Locator" or similar mechanisms, so the client is actually more independent.
You're right that DI has its issues / disadvantages, but the mentioned advantages outweigh them by far ... IMO.
You're right that with DI there is a dependency introduced in the interface (constructor) - but this is hopefully the very dependency that you need and that you want to make visible and checkable.
Yes, Service Locator is an anti-pattern: it violates encapsulation and SOLID.
Service Locator (SL)
Service Locator addresses the [DIP + DI] problem. It satisfies needs by interface name. A Service Locator can be a singleton or can be passed into the constructor.
class A
{
    private IB ib;

    public void Init()
    {
        ib = ServiceLocator.Resolve<IB>();
    }
}
The problem here is that it is not clear exactly which classes (realizations of IB) are used by the client (A).
Proposal: pass the parameter explicitly.
class A
{
    private readonly IB ib;

    public A(IB ib)
    {
        this.ib = ib;
    }
}
SL vs. DI IoC container (framework)
SL is about storing instances, whereas a DI IoC container (framework) is more about creating instances.
SL works like a PULL: the class retrieves its dependencies inside the constructor.
A DI IoC container (framework) works like a PUSH: it puts the dependencies into the constructor.
I think that the author of the article shot himself in the foot when trying to prove it's an anti-pattern, with the update written 5 years later. There, it is said that this is the correct way:
public OrderProcessor(IOrderValidator validator, IOrderShipper shipper)
{
    if (validator == null)
        throw new ArgumentNullException("validator");
    if (shipper == null)
        throw new ArgumentNullException("shipper");

    this.validator = validator;
    this.shipper = shipper;
}
Then, below, it is said:
Now it's clear that all three objects are required before you can call the Process method; this version of the OrderProcessor class advertises its preconditions via the type system. You can't even compile client code unless you pass arguments to constructor and method (you can pass null, but that's another discussion).
Let me stress the last part again:
you can pass null, but that's another discussion
Why is that another discussion? It is a huge deal. An object that receives its dependencies as arguments fully depends on the previous execution of the application (or tests) to provide those objects as valid references/pointers. It is not "encapsulated" in the terms the author expressed, as it depends on a lot of external machinery to run in a satisfactory manner for the object to be constructed at all, and later to work properly when it needs to use the other class.
The author claims that it's the Service Locator which is not encapsulated, because it depends on an additional object that you can't isolate from in the tests. But that other object could very well be a trivial map or vector, so it's pure data with no behavior. In C++, for example, containers are not part of the language, so you rely on containers (vectors, hash maps, strings, etc.) for all non-trivial classes. Are they not isolated because they rely on containers? I don't think so.
I think that whether you use manual dependency injection or a service locator, the objects are not really isolated from the rest: they need their dependencies one way or another; the dependencies are just provided differently. I for one think that a locator even helps with the DRY principle, as it's error-prone and repetitive to pass pointers over and over through the application. The service locator can also be more flexible in that an object might retrieve its dependencies when needed (if needed), not only via the constructor.
The problem of the dependencies not being explicit enough via the constructor of the object (for an object using the service locator) is solved by the very thing I stressed before: passing null pointers. It can even be used to mix and match both systems: if the pointer is null, use the service locator; otherwise, use the pointer. Now it is enforced via the type system, and it's obvious to the user of the class. But we can do even better.
One additional solution that is surely doable in C++ (I don't know about Java/C#, but I suppose it could be done as well) is to write a helper class to be instantiated like LocatorChecker<IOrderValidator, IOrderShipper>. That object could check in its constructor/destructor that the service locator holds a valid instance of the required classes, hence also being less repetitive than the example provided by Mark Seemann.
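A hedged C# analogue of that helper might look like this (Locator is a stand-in for whatever service locator the application uses):
using System;

// Stand-in facade for the application's service locator.
public static class Locator
{
    public static T Resolve<T>() where T : class
    {
        // Delegate to the real container here; returns null when unregistered.
        return null;
    }
}

// Fails fast in the constructor if either required service is missing,
// mirroring the C++ LocatorChecker idea from the text.
public class LocatorChecker<T1, T2>
    where T1 : class
    where T2 : class
{
    public LocatorChecker()
    {
        if (Locator.Resolve<T1>() == null)
            throw new InvalidOperationException($"Service not registered: {typeof(T1)}");
        if (Locator.Resolve<T2>() == null)
            throw new InvalidOperationException($"Service not registered: {typeof(T2)}");
    }
}

// e.g. new LocatorChecker<IOrderValidator, IOrderShipper>(); at startup.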

IoC/Dependency Injection - Best Practices for n-tier applications

Once again I come to you to ask some best-practices questions.
I am starting a new project, and I want to be able to test it properly; therefore I turn to IoC and Dependency Injection. I already know a fair amount about the concepts, but there are some small details I want to ask you about.
I will be using a 3 tier application architecture
ASP.NET -> BLL -> DAL
First Question
My first question is how best to go about resolving dependencies. With constructor injection, it seems to me that in some cases I will have big constructors with a lot of dependencies, even though on the actual code path I will need only a few of them.
Also, my concern is instantiating all the dependencies that I might need but probably won't, or maybe needing to instantiate something lower down (I know you should probably get it from the dependency resolver).
So, question: how do you go about injecting multiple dependencies without wasting resources or making initialization slow?
Second question
The dependencies will be given to a lot of controllers in my MVC application, and here it is smart to use the dependencies indirectly; the same probably goes for the BLL. But how about deep in the DAL? Say I need to use the CustomerDataProvider and the OrderDataProvider in addition to some other data providers. Do I instantiate them directly, or use dependency injection here again?
Third question
Last question: how do I incorporate sharding into all of this? All my clients will run on the same web server, but they each have their own DB (sharding), and I will also have a master DB. How do you accommodate this in Ninject?
I would get a copy of Mark Seemann's book on DI. He has a section regarding what, and what not, to inject. One of his recommendations is to only inject 'volatile' dependencies. This will shorten the parameter list of your constructors. His blog may cover this subject, but I'm not sure.

What good is Unity DI in MVC?

I'm slightly new to Unity and IoC, but not to MVC. I've been reading and reading about using Unity with MVC and the only really useful thing I'm consistently seeing is the ability to get free DI with the controllers.
To go from this:
public HomeController() : this(new UserRepository())
{
}

public HomeController(IUserRepository userRepository)
{
    this.UserRepository = userRepository;
}
To this:
public HomeController(IUserRepository userRepository)
{
    this.UserRepository = userRepository;
}
Basically, this allows me to drop the parameterless constructor. This is great and all, and I'm going to implement it for sure, but it doesn't seem like anything really that great given all the hype about IoC libraries. Going the way of using Unity as a service locator sounds compelling, but many would argue it's an anti-pattern.
So my question is, with service locating out of the question and some DI opportunities with Views and Filters, is there anything else I gain from using Unity? I just want to make sure I'm not missing something wonderful like free DI support for all class constructors.
EDIT:
I understand the testability purpose behind using Unity DI with MVC controllers. But all I would have to do is add that one extra little constructor, nix Unity, and I could unit test just the same. Where is the great benefit in registering your repository types and having a custom controller factory when the alternative - native DI - is simpler? I guess I'm really wondering what is so great about Unity (or any IoC library) besides service locating, which is bad. Is free controller DI really the ONLY thing I get from Unity?
A good IoC container not only creates the concrete class for you, it examines the couplings between that type and other types. If there are additional dependencies, it resolves them and creates instances of all of the classes that are required.
You can do fancy things like conditional binding. Here's an example using Ninject (my preferred IoC):
ninjectKernel.Bind<IValueCalculator>().To<LinqValueCalculator>();
ninjectKernel.Bind<IValueCalculator>().To<IterativeValueCalculator>().WhenInjectedInto<LimitShoppingCart>();
What Ninject is doing here is creating an instance of IterativeValueCalculator when injecting into LimitShoppingCart, and an instance of LinqValueCalculator for any other injection.
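To make that concrete, here's a sketch of the types the example implies (the implementations are placeholders):
public interface IValueCalculator
{
    decimal Calculate();
}

public class LinqValueCalculator : IValueCalculator
{
    public decimal Calculate() { return 0m; } // placeholder
}

public class IterativeValueCalculator : IValueCalculator
{
    public decimal Calculate() { return 0m; } // placeholder
}

// Receives IterativeValueCalculator because of the WhenInjectedInto binding.
public class LimitShoppingCart
{
    private readonly IValueCalculator calculator;
    public LimitShoppingCart(IValueCalculator calculator) { this.calculator = calculator; }
}

// Any other consumer receives LinqValueCalculator from the unconditional binding.
public class ShoppingCart
{
    private readonly IValueCalculator calculator;
    public ShoppingCart(IValueCalculator calculator) { this.calculator = calculator; }
}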
The greatest benefits are separation of concerns (decoupling) and testability.
Regarding why Service Locator is considered bad (by some), you can read this blog post by Mark Seemann.
Answering your question about what is so good in Unity: apart from all the testability, loose coupling, and the other blah-blah-blahs everyone talks about, you can use an awesome feature like Unity's Interception, which allows you to do some AOP-like things. I've used it in some of my last projects and liked it a lot. Strongly recommended!
P.S. It seems the Castle Windsor DI container has a similar feature as well (called Interceptors). Other containers - not sure.
Besides testing (which is a huge benefit and should not be underestimated), dependency injection allows:
Maintainability: The ability to alter the behavior of your code with a single change.
If you decide to change the class that retrieves your users across all your controllers/services etc. without dependency injection, you need to update each and every constructor plus any other random new instances that are being created, provided you remember where each one lives. DI allows you to change one definition that is then used across all implementations.
Scope: With a single line of code you can alter your implementation to create a singleton, a class that is created anew on each web request, or a new instance on each new thread.
Readability: The use of dependency injection means that all your concrete classes are defined in one place. As a developer coming onto a project, I can quickly and easily see exactly which concrete classes are mapped to which interfaces and know that there are no hidden implementations. It means I can not only read the code better but am also empowered to develop against it with confidence.
Design: I believe using dependency injection helps create well-designed code. You automatically code to interfaces, and your code becomes cleaner because you haven't got strange blocks of code just to help you test.
And let's not forget...
Testing: Testing is huge! Dependency injection allows you to test your code without having to write code specifically for tests. OK, you can create a new constructor, but what is to stop anyone else from using that constructor for a purpose it was not intended for? What if another developer comes along six months later and adds production logic to your 'test' constructor? OK, so you can make it internal, but it can still be used by production code. Why give them the option?
My experience with IoC frameworks has been largely around Ninject. As such, the above is based on what I know of Ninject; however, the same principles should hold across other frameworks.
No, the main point is that you then have a mapping somewhere that specifies the concrete type of IUserRepository. And the reason you might like that is that you can then create a unit test that supplies a mocked version of IUserRepository and execute your tests without changing anything in your controller.
Testability, mostly, but I'd suggest looking at Ninject; it plays well with MVC. To get the full benefits of IoC you should really combine this with Moq or a similar mocking framework.
Besides the testability, I think you can also add extensibility as one of the advantages. One of the best practices of software development is "work with abstractions, not with implementations" and, while you can do this in several ways, Unity provides a very extensible and easy way to achieve it. By using this, you will be creating abstractions that define the contracts your components must comply with. Say you want to completely change the repository that your application currently uses. With DI you just "swap" the component without changing a single line of code in your application. If you are using hard references, you might need to change and recompile your application because of a component that is external to it (in a different layer).
So, bottom line, IMHO, using DI helps you to have pluggable components in your application and a SOLID application design.

What strategies enable use of an IoC container in a library? [closed]

Shorter version of the rest of the question: I've got a problem, and an IoC container is a tool. The problem sounds like something IoC might be able to solve, but I haven't read enough of the tool's instructions to know for sure. I'm curious if I picked the wrong tool or which chapter of the instruction manual I might be able to skip to for help.
I work on a library of Windows Forms controls. In the past year, I've stumbled into unit testing and become passionate about improving the quality of our automated tests. Testing controls is difficult, and there's not much information about it online. One of the nasty things is that the separation of interaction logic from the UI glue that calls it leads to each control having several more dependencies than I would normally consider healthy for a class. Creating fakes for these elements when testing their integration with the control is quite tedious, and I'm looking into IoC for a solution.
There's one hurdle I'm not sure how to overcome. To set up the container, you need to have some bootstrapper code that runs before the rest of the application. In an application there is a very clear place for this stuff. In a library it's not so clear.
The first solution that comes to mind is creating a class that provides a static instance of the container and sets up the container in its type initializer. This would work for runtime, but in the test environment I'm not sure how well it would work. Tests are allowed to run in parallel and many tests will require different dependencies, so the static shared state will be a nightmare. This leads me to believe the container creation should be an instance method, but then I have a chicken and egg problem as a control would have to construct its container before it can create itself. The control's type initializer comes to mind, but my tests won't be able to modify this behavior. This led me to think of making the container itself a dependency of the control where user-visible constructors provide the runtime default implementation, but this leaves it up to my tests to set up their own containers. I haven't given this much thought, but it seems like this would be on the same level of effort as what I have now: tests that have to initialize 3-5 dependencies per test.
Normally I'd try a lot of things on my own to see what hurts. I'm under some harsh deadlines at the moment so I don't have much time to experiment as I write code; I only get brief moments to think about this and haven't put much to paper. I'm sure someone else has had a similar problem, so it'd be nice if I didn't have to reinvent the wheel.
Has anyone else attacked this problem? Are there some examples of strategies that will address these needs? Am I just a newbie and overcomplicating things due to my inexperience? If it's the latter, I'd love any resources you want to share for solving my ignorance.
Update:
I'd like to respond to Mark Seeman's answer, but it will require more characters than the comment field allows.
I'm already toying with presentation model patterns. The view in this case is the public control class and each has one or more controller classes. When some UI event is triggered on the control, the only logic it performs is deciding which controller methods need to be called.
The short version of my exploration of this design is that my controller classes are tightly coupled to their views. Based on the statement that DI containers work with loosely coupled code, I'm reading "wrong tool for the job". I might be able to design a more loosely coupled architecture, at which point a DI container may be easier to use. But that's going to require some significant effort and it'd be an overhaul of shipped code; I'll have to experiment with new stuff before tiptoeing around the older stuff. That's a question for another day.
Why do I even want to inject strongly coupled types rather than using local defaults? Some of the seams are intended to be extensibility points for advanced users. I have to test various scenarios involving incorrect implementations and also verify I meet my contracts; mock objects are a great fit.
For the current design, Chris Ballard's suggestion of "poor man's DI" is what I've more or less been following and for my strongly coupled types it's just a lot of tedious setup. I had this vision that I'd be able to push all of that tedium into some DI container setup method, but the more I try to justify that approach the more convinced I become that I'm trying to hang pictures with a sledgehammer.
I'll wait 24 hours or so to see if discussion progresses further before accepting.
Dependency Injection (Inversion of Control) is a set of principles and patterns that you can use to compose loosely coupled code. It's a prerequisite that the code is loosely coupled. A DI Container isn't going to make your code loosely coupled.
You'll need to find a way to decouple your UI rendering from UI logic. There are lots of Presentation Patterns that describe how to do that: Model View Controller, Model View Presenter, Presentation Model, etc.
Once you have good decoupling, DI (and containers) can be used to compose collaborators.
Since the library should not have any dependency on the IoC framework itself, we included Spring.NET IoC config XML files with the standard configuration for that lib. That was modular in theory, because every DLL had its own sub-config. All sub-configs were then assembled to become part of the main config.
But in reality this approach was error-prone and too interdependent: one lib's config had to know about properties of the others to avoid duplicates.
My conclusion: either use Chris Ballard's "poor man's dependency injection" or handle all dependencies in one big, ugly, tightly coupled config module of the main app.
Depending on how complex your framework is, you may get away with hand-coding constructor-based dependency injection, with a parameterless default constructor (which uses the normal concrete implementation) and a second constructor for injecting the dependency for unit-test purposes, e.g.:
private IMyDependency dependency;

public MyClass(IMyDependency dependency)
{
    this.dependency = dependency;
}

public MyClass() : this(new MyDefaultImplementation())
{
}
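Production code then uses the default constructor, while tests inject a fake; for example (FakeDependency is a placeholder implementing IMyDependency):
// Production: the default constructor wires in the normal implementation.
var worker = new MyClass();

// Unit test: hand a fake straight in; no container involved.
var sut = new MyClass(new FakeDependency());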

Can anyone explain to me, at length, how to use IOC containers?

I use dependency injection through parameters and constructors extensively. I understand the principle to this degree and am happy with it. On my large projects, I end up with too many dependencies being injected (anything hitting double figures feels too big - I like the term 'macaroni code').
As such, I have been considering IOC containers. I have read a few articles on them and so far I have failed to see the benefit. I can see how it assists in sending groups of related objects or in getting the same type over and over again. I'm not sure how they would help me in my projects where I may have over a hundred classes implementing the same interface, and where I use all of them in varying orders.
So, can anybody point me at some good articles that not only describe the concepts of IOC containers (preferably without hyping one in particular), but also show in detail how they benefit me in this type of project and how they fit into the scope of a large architecture?
I would hope to see some non-language specific stuff but my preferred language if necessary is C#.
Inversion of Control is primarily about dependency management and providing testable code. From a classic approach, if a class has a dependency, the natural tendency is to give the class that has the dependency direct control over managing its dependencies. This usually means the class that has the dependency will 'new' up its dependencies within a constructor or on demand in its methods.
Inversion of Control is just that...it inverts what creates dependencies, externalizing that process and injecting them into the class that has the dependency. Usually, the entity that creates the dependencies is what we call an IoC container, which is responsible not only for creating and injecting dependencies, but also for managing their lifetimes, determining their lifestyle (more on this in a sec), and offering a variety of other capabilities. (This is based on Castle MicroKernel/Windsor, which is my IoC container of choice...it's solidly written, very functional, and extensible. Other IoC containers exist that are simpler if you have simpler needs, like Ninject, Microsoft Unity, and Spring.NET.)
Consider that you have an internal application that can be used either in a local context or a remote context. Depending on some detectable factors, your application may need to load up "local" implementations of your services, and in other cases it may need to load up "remote" implementations of your services. If you follow the classic approach, and create your dependencies directly within the class that has those dependencies, then that class will be forced to break two very important rules about software development: Separation of Concerns and Single Responsibility. You cross boundaries of concern because your class is now concerned about both its intrinsic purpose, as well as the concern of determining which dependencies it should create and how. The class is also now responsible for many things, rather than a single thing, and has many reasons to change: its intrinsic purpose changes, the creation process for its dependencies changes, the way it finds remote dependencies changes, what dependencies its dependencies may need, etc.
By inverting your dependency management, you can improve your system architecture and maintain SoC and SR (or, possibly, achieve them where you were previously unable to due to dependencies). Since an external entity, the IoC container, now controls how your dependencies are created and injected, you can also gain additional capabilities. The container can manage the life cycles of your dependencies, creating and destroying them in more flexible ways that can improve efficiency. You also gain the ability to manage the lifestyles of your objects. If you have a type of dependency that is created, used, and returned on a very frequent basis, but which has little or no state (say, factories), you can give it a pooled lifestyle, which will tell the container to automatically create an object pool for that particular dependency type. Many lifestyles exist, and a container like Castle Windsor will usually give you the ability to create your own.
The better IoC containers, like Castle Windsor, also provide a lot of extensibility. By default, Windsor allows you to create instances of local types. It's possible to create Facilities that extend Windsor's type-creation capabilities to dynamically create web service proxies and WCF service hosts on the fly, at runtime, eliminating the need to create them manually or statically with tools like svcutil (this is something I did myself just recently). Many facilities exist to bring IoC support to existing frameworks, like NHibernate, ActiveRecord, etc.
Finally, IoC enforces a style of coding that ensures unit testable code. One of the key factors in making code unit testable is externalizing dependency management. Without the ability to provide alternative (mocked, stubbed, etc.) dependencies, testing a single "unit" of code in isolation is a very difficult task, leaving integration testing the only alternative style of automated testing. Since IoC requires that your classes accept dependencies via injection (by constructor, property, or method), each class is usually, if not always, reduced to a single responsibility of properly separated concern, and fully mockable dependencies.
IoC = better architecture, greater cohesion, improved separation of concerns, classes that are easier to reduce to a single responsibility, easily configurable and interchangeable dependencies (often without requiring a recompilation of your code), flexible dependency life styles and life time management, and unit testable code. IoC is kind of a lifestyle...a philosophy, an approach to solving common problems and meeting critical best practices like SoC and SR.
Even (or rather, particularly) with hundreds of different implementations of a single interface, IoC has a lot to offer. It might take a while to get your head fully wrapped around it, but once you fully understand what IoC is and what it can do for you, you'll never want to do things any other way (except perhaps embedded systems development...)
If you have over a hundred classes implementing a common interface, an IoC container won't help very much; you need a factory.
That way, you may do the following:
public interface IMyInterface
{
    // ...
}

public class Factory
{
    public static IMyInterface GetObject(string param)
    {
        // param is a parameter that helps the Factory decide what object to return
        // (this is only an example; there may not be any parameter at all)
        throw new NotImplementedException();
    }
}
//...
// You do not depend on a particular implementation here
IMyInterface obj = Factory.GetObject("some param");
Inside the factory, you may use an IoC container to retrieve the objects if you like, but you'll have to register each of the classes that implement the given interface and associate them with keys (and use those keys as parameters in the GetObject() method).
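A hedged sketch of that keyed registration using Ninject's named bindings (ImplA/ImplB and the keys are placeholders):
using Ninject;

public interface IMyInterface { }
public class ImplA : IMyInterface { }
public class ImplB : IMyInterface { }

public static class Factory
{
    private static readonly IKernel Kernel = new StandardKernel();

    static Factory()
    {
        // Register each implementation under a key.
        Kernel.Bind<IMyInterface>().To<ImplA>().Named("a");
        Kernel.Bind<IMyInterface>().To<ImplB>().Named("b");
    }

    // The key decides which named binding is resolved.
    public static IMyInterface GetObject(string key)
    {
        return Kernel.Get<IMyInterface>(key);
    }
}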
An IoC is particularly useful when you have to retrieve objects that implement different interfaces:
IMyInterface myObject = Container.GetObject<IMyInterface>();
IMyOtherInterface myOtherObject = Container.GetObject<IMyOtherInterface>();
ISomeOtherInterface someOtherObject = Container.GetObject<ISomeOtherInterface>();
See? Only one object to get several objects of different types, and no keys (the interfaces themselves are the keys). If you need one object to get several different objects that all implement the same interface, an IoC container won't help you very much.
In the past few weeks, I've taken the plunge from dependency-injection only to full-on inversion of control with Castle, so I understand where your question is coming from.
Some reasons why I wouldn't want to use an IOC container:
It's a small project that isn't going to grow that much. If there's a 1:1 relationship between constructors and calls to those constructors, using an IOC container isn't going to reduce the amount of code I have to write. You're not violating "don't repeat yourself" until you're finding yourself copying and pasting the exact same "var myObject = new MyClass(someInjectedDependency)" for a second time.
I may have to adapt existing code to facilitate being loaded into IOC containers. This probably isn't necessary until you get into some of the cooler Aspect-oriented programming features, but if you've forgotten to make a method virtual, sealed off that method's class, and it doesn't implement an interface, and you're uncomfortable making those changes because of existing dependencies, then making the switch isn't quite as appealing.
It adds an additional external dependency to my project -- and to my team. I can convince the rest of my team that structuring their code to allow DI is swell, but I'm currently the only one that knows how to work with Castle. On smaller, less complicated projects, this isn't going to be an issue. For the larger projects (that, ironically, would reap the most benefit from IOC containers), if I can't evangelize using an IOC container well enough, going maverick on my team isn't going to help anybody.
Some of the reasons why I wouldn't want to go back to plain DI:
I can add or remove logging on any number of my classes without adding any sort of trace or logging statements. The ability for my classes to be interwoven with additional functionality without changing those classes is extremely powerful. For example:
Logging: http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx
Transactions: http://www.codeproject.com/KB/architecture/introducingcastle.aspx (skip down to the Transaction section)
Castle, at least, is so helpful when wiring up classes to dependencies, that it would be painful to go back.
For example, missing a dependency with Castle:
"Can't create component 'MyClass' as
it has dependencies to be satisfied.
Service is waiting for the following
dependencies:
Services:
- IMyService which was not registered."
Missing a dependency without Castle:
Object reference is not set to an
instance of an object
Dead last: the ability to swap injected services at runtime by editing an XML file. My perception is that this is the most touted feature, but I see it as merely icing on the cake. I'd rather wire up all my services in code, but I'm sure I'll run into a headache in the future where my mind will be changed on this.
I will admit that -- being a newbie to IOC and Castle -- I'm probably only scratching the surface, but so far, I genuinely like what I see. I feel like the last few projects I've built with it are genuinely capable of reacting to the unpredictable changes that arise from day to day at my company, a feeling I've never quite had before.
Try these:
http://www.martinfowler.com/articles/injection.html
http://msdn.microsoft.com/en-us/library/aa973811.aspx
I have no links but can provide you with an example:
You have a web controller that needs to call a service which has a data access layer.
Now, I take it that in your code you are constructing these objects yourself at compile time. You are using a decent design pattern, but if you ever need to change the implementation of, say, the DAO, you have to go into your code, remove the code that sets this dependency up, and recompile/test/deploy. But if you were to use an IoC container, you would just change the class in the configuration and restart the application.
Jeremy Frey misses one of the biggest reasons for using an IOC container: it makes your code easier to mock and test.
Encouraging the use of interfaces has lots of other nice benefits: better layering, easier to dynamically generate proxies for things like declarative transactions, aspect-oriented programming and remoting.
If you think IOC is only good for replacing calls to "new", you don't get it.
IoC containers usually do the dependency injection, which in some projects is not a big deal, but some of the frameworks that provide IoC containers offer other services that make them worth using.
Castle, for example, has a complete list of services besides an IoC container: dynamic proxies, transaction management, and NHibernate facilities are some of them.
So I think you should consider IoC containers as part of an application framework.
Here's why I use an IoC container:
1. Writing unit tests will be easier. Actually, you write different configurations to do different things.
2. Adding different plugins for different scenarios (for different customers, for example).
3. Intercepting classes to add different aspects to our code.
4. Since we are using NHibernate, the transaction management and NHibernate facilities of Castle are very helpful in developing and maintaining our code.
It's like every technical aspect of our application is handled by an application framework, and we have time to think about what customers really want.
