I've been tasked with establishing code patterns to use for new applications that we'll be building. An early decision was made to build our applications and libraries using Prism & MEF with the intent of simplifying testing & reuse of functionality across applications.
As our code base has grown I've run into a problem.
Suppose we have some base library that needs access to some system - let's say a user management system:
using System.ComponentModel.Composition;

public interface IUserManagementSystem
{
    IUser AuthenticateUser(string username, string password);
}

// ...

public class SomeClass
{
    private readonly IUserManagementSystem _userSystem;

    [ImportingConstructor]
    public SomeClass(IUserManagementSystem userSystem)
    {
        _userSystem = userSystem;
    }
}
We can now create an object of type SomeClass using the ServiceLocator, but ONLY if an implementation of IUserManagementSystem has been registered. There is no way to know at compile time whether creation will succeed or fail, which implementations are needed, or any other critical information.
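For illustration, here is roughly what that run-time-only failure looks like if the library resolves through the Common Service Locator abstraction (SomeClassFactory is a hypothetical helper, not part of the original code):

using Microsoft.Practices.ServiceLocation;

public static class SomeClassFactory
{
    public static SomeClass Create()
    {
        // Compiles no matter what has (or hasn't) been registered. If the
        // host never registered an IUserManagementSystem, this line throws
        // an ActivationException at run time instead of failing the build.
        return ServiceLocator.Current.GetInstance<SomeClass>();
    }
}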
This problem becomes even more complicated if we use the ServiceLocator in the library.
Finding the now-hidden dependencies has become a bigger problem than hard-coded dependencies were in legacy applications. The IoC and Dependency Injection patterns aren't new, so how are we supposed to manage dependencies once we take that job away from the compiler? Should we avoid using ServiceLocator (or other IoC containers) in library code entirely?
EDIT: I'm specifically wondering how to handle cross-cutting concerns (such as logging, communication and configuration). I've seen some recommendations for building a CCC project (presumably making heavy use of singletons). In other cases (Is the dependency injection a cross-cutting concern?) the recommendation is to use the DI framework throughout the library leading back to the original problem of tracking dependencies.
Dependency Inject (DI) "friendly" library is somewhat relevant in that it explains how to structure code in a manner that can be used with or without a DI framework, but it doesn't address whether or not it makes sense to use DI within a library, or how to determine the dependencies a given library requires. The marked answers there provide solid architectural advice about interacting with a DI framework, but they don't address my question.
You should not be using ServiceLocator or any other DI-framework-specific pieces in your core library. That couples your code to the specific DI framework and means anyone consuming your library must also add that DI framework as a dependency of their product. Instead, use constructor injection and the various techniques mentioned by Mark Seemann in his excellent answer here.
Then, to help users of your library to get bootstrapped more easily, you can provide separate DI-Container-specific libraries with some basic utility classes that handle most of the standard DI bindings you're expecting people to use, so in many cases they can get started with a single line of code.
The problem of missing dependencies until run time is a common one when using DI frameworks. The only real way around it is to use "poor man's DI", where you have a class that explicitly defines how each type of object gets constructed, so you get a compile-time error if you introduce a dependency that you haven't dealt with.
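A minimal sketch of that idea, reusing the question's IUserManagementSystem example (LdapUserManagementSystem is a made-up implementation):

// A hand-rolled composition root: every dependency is constructed with
// "new", so a missing implementation is a compile-time error rather than
// a run-time surprise.
public static class CompositionRoot
{
    public static SomeClass CreateSomeClass()
    {
        IUserManagementSystem userSystem = new LdapUserManagementSystem();
        return new SomeClass(userSystem);
    }
}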
However, you can often mitigate the problem by checking things as early in the runtime as possible. For example, SimpleInjector has a Verify() method that you can call after all the bindings have been set up, and it'll throw an exception if it can tell that it won't know how to construct the dependencies of any of the types registered with it. I also like adding a simple integration test to my test suite that sets up all the bindings and then tries to construct all of the top-level types in my application (Web API controllers, for example), and fails if it can't do so.
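For illustration, a rough sketch of that early check with SimpleInjector, reusing the types from the question (LdapUserManagementSystem is an assumed implementation):

using SimpleInjector;

var container = new Container();
container.Register<IUserManagementSystem, LdapUserManagementSystem>(); // assumed implementation
container.Register<SomeClass>();

// Throws at startup if any registration has a dependency the container
// cannot construct - long before a user hits that code path.
container.Verify();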
Related
Why is IDependencyResolver coupled to the System.Web assembly (either Mvc or Http) in the .NET Framework?
Isn't the goal of a DI system to provide an agnostic way of serving dependencies to a consumer? What if I want to use IDependencyResolver in a project that should not reference anything related to System.Web?
Edit:
This is more a philosophical question than a request about how to do it, because I know there are alternatives such as open-source DI libraries.
Isn't the goal of a DI system to provide an agnostic way of serving dependencies to a consumer?
That is correct, but in this case, IDependencyResolver is specific to the library where it is defined. It is that library's DI abstraction, providing an agnostic extensibility point for dependency resolution. And I believe that was the initial goal of the abstraction.
It was not really made to be reused independently by other libraries, which is evident from the fact that there are two versions, one for MVC and one for Web API. Though they share the same name and purpose, their implementations differ slightly.
It also demonstrates the Conforming Container anti-pattern described in this article by Mark Seemann, which lists the above abstractions as known examples of Conforming Containers in .NET. Even my preferred approach of using IServiceProvider made the list.
What if I want to use IDependencyResolver in a project that should not reference anything related to System.Web?
My suggestion then would be to not use IDependencyResolver from System.Web. I would also add that, above all, particular attention should be paid to following proper design patterns, making sure that one understands the concepts and where they should be applied or avoided.
The interface IDependencyResolver is an extension point of the System.Web frameworks. The frameworks rely on this interface to resolve instances (of controllers and the like) and their dependencies.
The framework ships with its own implementation of the interface, but you can provide your own. The built-in implementation has only limited functionality (in areas such as external configuration, injectable types, and interception).
Most IoC containers and DI frameworks provide an implementation of this interface so that you can integrate them into the existing framework.
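A rough sketch of what such an adapter looks like for the MVC flavour; IMyContainer is a stand-in for whichever container you are integrating, and the class name is invented:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Stand-in for whatever container you are adapting.
public interface IMyContainer
{
    object TryResolve(Type type);            // returns null when the type is unknown
    IEnumerable<object> ResolveAll(Type type);
}

public class MyContainerDependencyResolver : IDependencyResolver
{
    private readonly IMyContainer _container;

    public MyContainerDependencyResolver(IMyContainer container)
    {
        _container = container;
    }

    // MVC expects null (not an exception) for unknown types,
    // so it can fall back to its own defaults.
    public object GetService(Type serviceType) => _container.TryResolve(serviceType);

    public IEnumerable<object> GetServices(Type serviceType) =>
        _container.ResolveAll(serviceType) ?? Enumerable.Empty<object>();
}

// Wired up once at application start:
// DependencyResolver.SetResolver(new MyContainerDependencyResolver(container));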
Why is IDependencyResolver coupled to the System.Web assembly (either Mvc or Http) in the .NET Framework?
Because it is an interface 'they' use to resolve framework services. But yeah... they should, at the very least, have used IServiceProvider from the System namespace.
Isn't the goal of a DI system to provide an agnostic way of serving dependencies to a consumer?
Nope. That is not the goal in that context. The main goal for the framework author is to let you extend or even replace the internal services the framework is using.
In your code you should introduce your own facades over these 'standard' interfaces. They are very weak abstractions - good as a baseline, but a long way from any richer resolution or lifetime-management strategies.
What if I want to use IDependencyResolver in a project that should not reference anything related to System.Web?
You cannot (without adding a System.Web reference), and you shouldn't. Use your own internal abstraction (facade) over the DI framework. Just as you shouldn't use NLog.ILogger directly in your classes, the same applies to DI framework abstractions.
Some frameworks will make this nearly or outright impossible, but you should use your own facades wherever possible.
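A minimal sketch of such a facade; the interface and class names are invented, and the NLog adapter assumes a recent NLog version:

// Hypothetical facade: the rest of the code base depends only on these
// two interfaces, never on the container or logging packages directly.
public interface IServiceFactory
{
    T Resolve<T>() where T : class;
}

public interface IAppLogger
{
    void Info(string message);
    void Error(string message, System.Exception ex);
}

// One small adapter per third-party package, kept at the edge of the system.
public class NLogAppLogger : IAppLogger
{
    private readonly NLog.ILogger _inner = NLog.LogManager.GetCurrentClassLogger();

    public void Info(string message) => _inner.Info(message);
    public void Error(string message, System.Exception ex) => _inner.Error(ex, message);
}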
The same rule applies in a broader sense as well.
Don't attach your project (unnecessarily) to some cloud service such as Azure. The other side might have much better prices some day. Limit dependencies and sticky parts as much as possible.
Edit:
This is more a philosophical question than a request about how to do it, because I know there are alternatives such as open-source DI libraries.
Oh... and the same advice goes for DI frameworks. Don't overuse DI framework features that could easily be implemented in a different way behind your facades.
NOTE: The same goes for CI pipelines and Service Bus/Message Queue frameworks.
I'm working on a project at the moment and it's going to be primarily library-based.
I want the library to be consumed using dependency injection, but I want the library to be largely agnostic towards the container being used.
I wrote a "bridge" library a while back to make this sort of thing easier, but I wasn't sure if this was actually the right approach? (library: https://github.com/clintkpearson/IoCBridge)
I don't want to reference the DI-technology (Ninject, Windsor etc) directly from my library as then it makes it inflexible for people using it.
There are a few other questions on SO in a similar vein but none of them seem to actually address the problem satisfactorily.
As a side note: I realise I could just make sure the library adheres to the general idiom and uses interfaces & ctor arguments for dependencies, and then just leave it up to the consuming app to register the types in the containers.
The only issue I can see with this (and correct me if I'm wrong) is that this requires the consuming app to actually know which types link to which interfaces, whether some need to be registered as singletons etc... and from a plug-and-play usage perspective that's pretty poor.
It's a bit controversial but I suggest using Poor Man's Injection. I'm not saying it's great, but it has some valid use cases (just like Service Locator) under some constraints. It will require a little more maintenance, but it will save you from depending on another library for IoC container registration. You should read Mark Seemann's article on the subject.
I recently implemented this approach in a very simple library of mine. Basically you write two constructors for the public classes of the library.
// Internal constructor: dependencies injected explicitly (reachable from tests via InternalsVisibleTo).
internal SitemapProvider(IActionResultFactory actionResultFactory, IBaseUrlProvider baseUrlProvider)
{
    _actionResultFactory = actionResultFactory;
    _baseUrlProvider = baseUrlProvider;
}

// Public constructor: supplies the default implementations (Poor Man's DI).
public SitemapProvider() : this(new ActionResultFactory(), new BaseUrlProvider()) { }
As you can see, only the second constructor is public, and you fill in the dependencies yourself. This also provides encapsulation at the assembly level. You can still test this class by adding an InternalsVisibleTo attribute to the assembly, and you can use dependency injection inside your library freely. The user can also create instances with the new keyword or add this class's interface to their IoC registration.
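For example (the test assembly name below is a placeholder):

// In the library (e.g. in AssemblyInfo.cs), so the test project can reach
// the internal constructor; replace "MyLibrary.Tests" with your test assembly.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyLibrary.Tests")]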
I don't know if there's a widely adopted IoC container registration library in .NET. I thought about writing one myself, but each container has its own unique features and it gets more complicated with object life cycles. Also, people will be uneasy about depending on yet another library for this.
A good DI implementation should enable DI on any object, regardless of whether the latter is DI-agnostic or not.
Prism is a bad example: the last time I used it (two years ago) it forced objects to be DI-aware by requiring the [Injection] attribute. A good counter-example is the Spring Framework (an extremely popular DI framework for Java, with a .NET port called Spring.NET), which allows enabling DI via so-called context files - XML files that describe the dependencies. These need not be part of your library, leaving it a completely independent DLL.
The example of Spring shows that you should not need any specific configuration, prerequisites, or patterns to make an object injectable, or to allow objects to be injected into it, beyond the programming-to-interfaces paradigm and programmatic access to suitable constructors and property setters.
This does not mean that every DI framework supports manipulating plain CLR (.NET) objects, a.k.a. POCOs. Some frameworks rely only on their own specific mechanisms and may not be suitable for use with DI-independent code. Usually, they require the library to take a direct dependency on the DI framework, which I think you want to (and probably should) avoid.
I think you have slightly misinterpreted the scope of Dependency Injection. DI is a pattern, a subset of IoC, and IoC containers make DI easy and convenient - they assist with dependency resolution. IoC can be categorised as a superset of several methodologies, of which DI is one part.
You do not need IoC frameworks in order to make Dependency Injection work.
If you really insist on using an IoC container instead of leveraging regular DI (i.e. constructor parameters or mandatory property setting), then you should nominate the container/framework; don't try to be all things to all people by kludging together adapters or bridges. Be cautious about over-engineering. A library, by its very definition, has a limited and well-defined set of functionality, and therefore it should not need a large number of dependencies injected.
They'll most likely want to implement their own versions of some of the interfaces
You don't need an IoC framework to achieve this. If your constructors have their parameters defined as interfaces then in effect you've already achieved DI - the dependency is injected at construction time and you know nothing about the actual concrete implementation of it. Let the calling code worry about the nitty gritty details of which implementation of that interface it wants to pass in.
I'm slightly new to Unity and IoC, but not to MVC. I've been reading and reading about using Unity with MVC and the only really useful thing I'm consistently seeing is the ability to get free DI with the controllers.
To go from this:
public HomeController() : this(new UserRepository())
{
}

public HomeController(IUserRepository userRepository)
{
    this.UserRepository = userRepository;
}
To this:
public HomeController(IUserRepository userRepository)
{
    this.UserRepository = userRepository;
}
Basically, this allows me to drop the parameterless constructor. That's great and I'm going to implement it for sure, but it doesn't seem like anything that justifies all the hype around IoC libraries. Going the route of using Unity as a service locator sounds compelling, but many would argue it's an anti-pattern.
So my question is, with service locating out of the question and some DI opportunities with Views and Filters, is there anything else I gain from using Unity? I just want to make sure I'm not missing something wonderful like free DI support for all class constructors.
EDIT:
I understand the testability purpose behind using Unity DI with MVC controllers. But all I would have to do is add that one extra little constructor, nix Unity, and I could unit test just the same. Where is the great benefit in registering your repository types and having a custom controller factory when the alternative - native DI - is simpler? I guess I'm really wondering what is so great about Unity (or any IoC library) besides service locating, which is bad. Is free controller DI really the ONLY thing I get from Unity?
A good IoC container not only creates the concrete class for you, it examines the couplings between that type and other types. If there are additional dependencies, it resolves them and creates instances of all of the classes that are required.
You can do fancy things like conditional binding. Here's an example using Ninject (my preferred IoC):
ninjectKernel.Bind<IValueCalculator>().To<LinqValueCalculator>();
ninjectKernel.Bind<IValueCalculator>().To<IterativeValueCalculator>().WhenInjectedInto<LimitShoppingCart>();
What Ninject is doing here is creating an instance of IterativeValueCalculator when injecting into LimitShoppingCart, and an instance of LinqValueCalculator for any other injection.
The greatest benefits are separation of concerns (decoupling) and testability.
Regarding why Service Locator is considered bad (by some), you can read this blog post by Mark Seemann.
Answering your question about what is so good in Unity: apart from all the testability, loose coupling, and the other things everyone is talking about, you can use an awesome feature like Unity's Interception, which lets you do some AOP-like things. I've used it in some of my recent projects and liked it very much. Strongly recommended!
P.S. It seems the Castle Windsor DI container has a similar feature as well (called Interceptors). Other containers - I'm not sure.
Besides testing (which is a huge benefit and should not be underestimated), dependency injection allows:
Maintainability: The ability to alter the behavior of your code with a single change.
If you decide to change the class that retrieves your users across all your controllers/services etc., then without dependency injection you need to update each and every constructor, plus any other random new instances being created, provided you remember where each one lives. DI allows you to change one definition that is then used across all implementations.
Scope: With a single line of code you can alter your implementation to create a singleton, a class that is created once per web request, or one per thread (see the Ninject sketch below).
Readability: The use of dependency injection means that all your concrete classes are defined in one place. As a developer coming onto a project, I can quickly and easily see exactly which concrete classes are mapped to which interfaces and know that there are no hidden implementations. It means I can not only read the code better but am empowered to develop against it with confidence.
Design: I believe using dependency injection helps create well-designed code. You automatically code to interfaces, and your code becomes cleaner because you haven't got strange blocks of code that exist only to help you test.
And let's not forget...
Testing: Testing is huge! Dependency injection allows you to test your code without having to write code specifically for tests. Ok, you can create a new constructor, but what is to stop anyone else using that constructor for a purpose it was not intended for? What if another developer comes along six months later and adds production logic to your 'test' constructor? Ok, so you can make it internal, but that can still be used by production code. Why give them the option?
My experience with IoC frameworks has been largely with Ninject. As such, the above is based on what I know of Ninject; however, the same principles should hold across other frameworks.
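To make the "Scope" point above concrete, here is a minimal Ninject sketch; UserRepository is an assumed concrete implementation, and InRequestScope() comes from the Ninject.Web.Common package:

using Ninject;
using Ninject.Web.Common; // provides InRequestScope()

var kernel = new StandardKernel();

// One line decides the lifetime of every IUserRepository in the application:
kernel.Bind<IUserRepository>().To<UserRepository>().InSingletonScope();

// ...or one instance per web request:
// kernel.Bind<IUserRepository>().To<UserRepository>().InRequestScope();

// ...or one instance per thread:
// kernel.Bind<IUserRepository>().To<UserRepository>().InThreadScope();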
No, the main point is that you then have a mapping somewhere that specifies the concrete type of IUserRepository. And the reason you might like that is that you can then create a unit test that specifies a mocked version of IUserRepository and execute your tests without changing anything in your controller.
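A rough sketch of both halves of that idea; UserRepository and the MVC integration package (e.g. Unity.Mvc) are assumptions, and Moq stands in for whichever mocking library you use:

using Microsoft.Practices.Unity;
using Moq;

// Production wiring (e.g. in a bootstrapper called from Global.asax):
var container = new UnityContainer();
container.RegisterType<IUserRepository, UserRepository>();
// An MVC integration package then plugs the container into MVC so that
// HomeController - and its IUserRepository - are resolved automatically.

// Unit test: no container involved, just hand the controller a Moq mock.
var repository = new Mock<IUserRepository>();
var controller = new HomeController(repository.Object);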
Testability mostly, but I'd suggest looking at Ninject; it plays well with MVC. To get the full benefits of IoC you should really combine this with Moq or a similar mocking framework.
Besides testability, I think you can also add extensibility as one of the advantages. One of the best practices of software development is "program to abstractions, not implementations", and while you can do this in several ways, Unity provides a very extensible and easy way to achieve it. By using it, you will be creating abstractions that define the contracts your components must comply with. Say you want to completely swap out the repository your application currently uses: with DI you just swap the component, without changing a single line of code in your application. With hard references, you might need to change and recompile your application because of a component that is external to it (in a different layer).
So, bottom line, IMHO, using DI helps you to have pluggable components in your application and have a SOLID application design
I've been looking at the Common Service Locator as a way of abstracting my IoC container, but I've noticed that some people are strongly against this sort of thing.
Do people recommend never using it? Always using it? Or only sometimes?
If sometimes, then in what situations would you use it, and in what situations would you avoid it?
Imagine you are writing library code to be used by 3rd party developers. Your code needs to be able to create service objects that these developers provide. However you don’t know which IoC container each of your callers will be using.
The Common Service Locator lets you cope with the above without forcing a given IoC on your users.
Within your library itself you may wish to register your own classes in the IoC; now it gets a lot harder, as you need to choose an IoC for your own use that will not get in the way of your callers.
I noticed that one of the arguments against using the CSL is a false one, because developers think this library is only capable of doing the Service Locator pattern. This however isn't the case, because it is easy to use it with the Dependency Injection pattern as well.
However, the CSL library was specially designed for framework designers who need to allow users to register dependencies. Because the library will be calling the CSL directly, from the framework's perspective we're talking about the SL pattern, hence its name.
As a framework designer, however, taking a dependency on the CSL shouldn't be taken lightly. For the usability of your framework it is normally much better to have your own DI mechanism. A very common mechanism is to set up dependencies in the configuration file. This pattern is used throughout the whole .NET Framework; almost every dependency can be replaced with another. The .NET provider pattern is built on top of this.
When you, as a framework designer, take a dependency on the CSL, it will be harder for users to use your application. Users will have to configure an IoC container and hook it up to the CSL. However, it is not possible for the framework to validate the configuration as can be done when using the .NET configuration system, which has all kinds of validation support built in.
I've done some reading on the service locator concept lately. It is a way of helping to reduce coupling, but requires code coupling to the locator - not the container backing the locator, but the locator itself. It is a tradeoff, but can be beneficial in the right situation.
One situation where it can be helpful is when you have code that does not make use of DI, such as legacy code - I am in this boat now. Pulling in required objects via SL, rather than directly creating them, allows the addition of some abstraction. I see it as an intermediate step between direct instantiation and DI/IoC.
If you have library code that is in need of services and this code could be hosted in the context of a larger framework/runtime then the framework / runtime would need to provide a mechanism where you can run some custom code on startup wherein you can initialize your container and register dependencies.
A good example of where CSL can be problematic is when using it in the context of MSCRM. You can have custom business logic executed by registering plugins which the MSCRM framework executes on certain events. The problem you run into is where to run the registration logic, since there is no "startup" event that you can subscribe to for setting up your DI container. Even if you could somehow set up your DI, you would need to put the CSL and the DI libraries in the GAC, since that is the only way to call out to third-party code from a plugin (one more item to add to your deployment checklist).
In scenarios such as this you are better off having your dependencies as constructor parameters that the calling code can initialize as it sees fit (via either constructor injection or manually "newing" up the appropriate interface implementation).
I use dependency injection through parameters and constructors extensively. I understand the principle to this degree and am happy with it. On my large projects, I end up with too many dependencies being injected (anything hitting double figures feels too big - I like the term 'macaroni code').
As such, I have been considering IOC containers. I have read a few articles on them and so far I have failed to see the benefit. I can see how it assists in sending groups of related objects or in getting the same type over and over again. I'm not sure how they would help me in my projects where I may have over a hundred classes implementing the same interface, and where I use all of them in varying orders.
So, can anybody point me at some good articles that not only describe the concepts of IOC containers (preferably without hyping one in particular), but also show in detail how they benefit me in this type of project and how they fit into the scope of a large architecture?
I would hope to see some non-language specific stuff but my preferred language if necessary is C#.
Inversion of Control is primarily about dependency management and providing testable code. From a classic approach, if a class has a dependency, the natural tendency is to give the class that has the dependency direct control over managing its dependencies. This usually means the class that has the dependency will 'new' up its dependencies within a constructor or on demand in its methods.
Inversion of Control is just that... it inverts what creates dependencies, externalizing that process and injecting them into the class that has the dependency. Usually, the entity that creates the dependencies is what we call an IoC container, which is responsible not only for creating and injecting dependencies, but also for managing their lifetimes, determining their lifestyle (more on this in a sec), and offering a variety of other capabilities. (This is based on Castle MicroKernel/Windsor, which is my IoC container of choice; it's solidly written, very functional, and extensible. Simpler IoC containers exist if you have simpler needs, like Ninject, Microsoft Unity, and Spring.NET.)
Consider that you have an internal application that can be used either in a local context or a remote context. Depending on some detectable factors, your application may need to load up "local" implementations of your services, and in other cases it may need to load up "remote" implementations of your services. If you follow the classic approach, and create your dependencies directly within the class that has those dependencies, then that class will be forced to break two very important rules about software development: Separation of Concerns and Single Responsibility. You cross boundaries of concern because your class is now concerned about both its intrinsic purpose, as well as the concern of determining which dependencies it should create and how. The class is also now responsible for many things, rather than a single thing, and has many reasons to change: its intrinsic purpose changes, the creation process for its dependencies changes, the way it finds remote dependencies changes, what dependencies its dependencies may need, etc.
By inverting your dependency management, you can improve your system architecture and maintain SoC and SR (or possibly achieve them when you were previously unable to because of dependencies). Since an external entity, the IoC container, now controls how your dependencies are created and injected, you also gain additional capabilities. The container can manage the life cycles of your dependencies, creating and destroying them in more flexible ways that can improve efficiency. You also gain the ability to manage the lifestyles of your objects. If you have a type of dependency that is created, used, and returned on a very frequent basis, but which has little or no state (say, factories), you can give it a pooled lifestyle, which tells the container to automatically create an object pool for that particular dependency type. Many lifestyles exist, and a container like Castle Windsor will usually give you the ability to create your own.
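As a rough illustration of lifestyle registration using Castle Windsor's fluent API (the component and interface names below are invented for the example):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

var container = new WindsorContainer();

container.Register(
    // A frequently requested, essentially stateless factory: pooled.
    Component.For<IWidgetFactory>().ImplementedBy<WidgetFactory>().LifestylePooled(),

    // A long-lived shared service: one instance for the whole application.
    Component.For<IUserManagementSystem>().ImplementedBy<LdapUserManagementSystem>().LifestyleSingleton(),

    // A cheap, stateful worker: a fresh instance per resolution.
    Component.For<IReportBuilder>().ImplementedBy<ReportBuilder>().LifestyleTransient());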
The better IoC containers, like Castle Windsor, also provide a lot of extensibility. By default, Windsor allows you to create instances of local types. It's possible to create Facilities that extend Windsor's type-creation capabilities to dynamically create web service proxies and WCF service hosts on the fly, at runtime, eliminating the need to create them manually or statically with tools like svcutil (this is something I did myself just recently). Many facilities exist to bring IoC support to existing frameworks, like NHibernate, ActiveRecord, etc.
Finally, IoC enforces a style of coding that ensures unit testable code. One of the key factors in making code unit testable is externalizing dependency management. Without the ability to provide alternative (mocked, stubbed, etc.) dependencies, testing a single "unit" of code in isolation is a very difficult task, leaving integration testing the only alternative style of automated testing. Since IoC requires that your classes accept dependencies via injection (by constructor, property, or method), each class is usually, if not always, reduced to a single responsibility of properly separated concern, and fully mockable dependencies.
IoC = better architecture, greater cohesion, improved separation of concerns, classes that are easier to reduce to a single responsibility, easily configurable and interchangeable dependencies (often without requiring a recompilation of your code), flexible dependency life styles and life time management, and unit testable code. IoC is kind of a lifestyle...a philosophy, an approach to solving common problems and meeting critical best practices like SoC and SR.
Even (or rather, particularly) with hundreds of different implementations of a single interface, IoC has a lot to offer. It might take a while to get your head fully wrapped around it, but once you fully understand what IoC is and what it can do for you, you'll never want to do things any other way (except perhaps embedded systems development...)
If you have over a hundred classes implementing a common interface, an IoC container won't help very much; you need a factory.
That way, you may do the following:
public interface IMyInterface
{
    // ...
}

public class Factory
{
    public static IMyInterface GetObject(string param)
    {
        // param helps the factory decide which implementation to return
        // (this is only an example; there may be no parameter at all)
        throw new NotImplementedException(); // return the chosen implementation here
    }
}

// ...
// You do not depend on a particular implementation here:
IMyInterface obj = Factory.GetObject("some param");
Inside the factory, you may use an IoC container to retrieve the objects if you like, but you'll have to register each of the classes that implement the given interface and associate them with some keys (and use those keys as parameters in the GetObject() method).
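A minimal sketch of that keyed-factory idea without any container at all; FastImplementation and SafeImplementation are made-up classes, and with an IoC container the same map becomes named/keyed registrations resolved by key:

using System;
using System.Collections.Generic;

public static class KeyedFactory
{
    // Hypothetical key -> implementation map.
    private static readonly Dictionary<string, Func<IMyInterface>> Map =
        new Dictionary<string, Func<IMyInterface>>
        {
            { "fast", () => new FastImplementation() },
            { "safe", () => new SafeImplementation() },
        };

    public static IMyInterface GetObject(string key)
    {
        Func<IMyInterface> create;
        if (!Map.TryGetValue(key, out create))
            throw new ArgumentException("No implementation registered for key: " + key);
        return create();
    }
}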
An IoC is particularly useful when you have to retrieve objects that implement different interfaces:
IMyInterface myObject = Container.GetObject<IMyInterface>();
IMyOtherInterface myOtherObject = Container.GetObject<IMyOtherInterface>();
ISomeOtherInterface someOtherObject = Container.GetObject<ISomeOtherInterface>();
See? Only one object to get several objects of different types, and no keys (the interfaces themselves are the keys). If you need one object to get several different objects that all implement the same interface, an IoC container won't help you very much.
In the past few weeks, I've taken the plunge from dependency-injection only to full-on inversion of control with Castle, so I understand where your question is coming from.
Some reasons why I wouldn't want to use an IOC container:
It's a small project that isn't going to grow that much. If there's a 1:1 relationship between constructors and calls to those constructors, using an IOC container isn't going to reduce the amount of code I have to write. You're not violating "don't repeat yourself" until you're finding yourself copying and pasting the exact same "var myObject = new MyClass(someInjectedDependency)" for a second time.
I may have to adapt existing code to facilitate being loaded into IOC containers. This probably isn't necessary until you get into some of the cooler Aspect-oriented programming features, but if you've forgotten to make a method virtual, sealed off that method's class, and it doesn't implement an interface, and you're uncomfortable making those changes because of existing dependencies, then making the switch isn't quite as appealing.
It adds an additional external dependency to my project -- and to my team. I can convince the rest of my team that structuring their code to allow DI is swell, but I'm currently the only one that knows how to work with Castle. On smaller, less complicated projects, this isn't going to be an issue. For the larger projects (that, ironically, would reap the most benefit from IOC containers), if I can't evangelize using an IOC container well enough, going maverick on my team isn't going to help anybody.
Some of the reasons why I wouldn't want to go back to plain DI:
I can add or remove logging on any number of my classes without adding any sort of trace or logging statement. Having the ability for my classes to be interwoven with additional functionality without changing those classes is extremely powerful. For example:
Logging: http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx
Transactions: http://www.codeproject.com/KB/architecture/introducingcastle.aspx (skip down to the Transaction section)
Castle, at least, is so helpful when wiring up classes to dependencies, that it would be painful to go back.
For example, missing a dependency with Castle:
"Can't create component 'MyClass' as
it has dependencies to be satisfied.
Service is waiting for the following
dependencies:
Services:
- IMyService which was not registered."
Missing a dependency without Castle:
Object reference not set to an instance of an object
Dead last: the ability to swap injected services at runtime by editing an XML file. My perception is that this is the most touted feature, but I see it as merely icing on the cake. I'd rather wire up all my services in code, but I'm sure I'll run into a headache in the future that will change my mind on this.
I will admit that -- being a newbie to IOC and Castle -- I'm probably only scratching the surface, but so far, I genuinely like what I see. I feel like the last few projects I've built with it are genuinely capable of reacting to the unpredictable changes that arise from day to day at my company, a feeling I've never quite had before.
Try these:
http://www.martinfowler.com/articles/injection.html
http://msdn.microsoft.com/en-us/library/aa973811.aspx
I have no links but can provide you with an example:
You have a web controller that needs to call a service which has a data access layer.
Now, I take it that in your code you are constructing these objects yourself at compile time. You are using a decent design pattern, but if you ever need to change the implementation of, say, the DAO, you have to go into your code, remove the code that sets this dependency up, and then recompile/test/deploy. But if you were to use an IoC container, you would just change the class in the configuration and restart the application.
Jeremy Frey misses one of the biggest reasons for using an IOC container: it makes your code easier to mock and test.
Encouraging the use of interfaces has lots of other nice benefits: better layering, easier to dynamically generate proxies for things like declarative transactions, aspect-oriented programming and remoting.
If you think IOC is only good for replacing calls to "new", you don't get it.
IoC containers usually do the dependency injection, which in some projects is not a big deal, but some of the frameworks that provide IoC containers offer other services that make them worth using.
Castle, for example, has a complete list of services besides an IoC container. Dynamic proxies, transaction management and NHibernate facilities are some of them.
So I think you should consider IoC containers as part of an application framework.
Here's why I use an IoC container:
1. Writing unit tests will be easier. Actually, you write different configurations to do different things.
2. Adding different plugins for different scenarios (for different customers, for example).
3. Intercepting classes to add different aspects to your code.
4. Since we are using NHibernate, the transaction management and NHibernate facilities of Castle are very helpful in developing and maintaining our code.
It's as if every technical aspect of our application is handled by an application framework, and we have time to think about what customers really want.