Unity Best Practices with .NET Framework Classes - c#

I am just getting started using any DI/IoC toolset and have a very basic question. I am using Unity as we will also be using a number of the Enterprise Library blocks throughout the application.
The question I have is around dependencies on .NET Framework classes. For instance, in one of the classes I am currently working on, I need to create DirectoryInfo instances. According to my understanding, one of the best practices in DI is to never use the "new" keyword, as this introduces a hard dependency. So how should I get a new DirectoryInfo? Add it as an item to the container and make the container a dependency of the class? That seems impractical in real-life usage, as I would end up with the container being configured with literally hundreds to thousands of framework classes. I would consider that a maintenance nightmare.

The way I see it, the majority of classes in the BCL (Base Class Library) are utility classes, and generally don't have many dependencies other than straight-up call graphs.
DI is designed to flatten call graphs and prevent a lot of cross-talk between classes. Given that, and the fact that you can't do anything about the BCL's chattiness, you might as well accept those classes for what they are and go ahead and use new...
The only reason you wouldn't want to is for testing purposes. You're not typically testing BCL classes, but you might want to replace a BCL class with a mocked version so that it doesn't actually do whatever the class does, only pretends to do so, and you can use something like SystemWrapper to do that if necessary.
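If you only need that test seam in a couple of places, a thin hand-rolled wrapper is often enough. A minimal sketch, assuming you only care about a few members (the IDirectoryInfoWrapper interface and class names here are made up for illustration, not part of SystemWrapper or the BCL):

    using System.IO;

    public interface IDirectoryInfoWrapper
    {
        bool Exists { get; }
        string FullName { get; }
        void Create();
    }

    public class DirectoryInfoWrapper : IDirectoryInfoWrapper
    {
        private readonly DirectoryInfo _inner;

        public DirectoryInfoWrapper(string path)
        {
            _inner = new DirectoryInfo(path);
        }

        public bool Exists { get { return _inner.Exists; } }
        public string FullName { get { return _inner.FullName; } }
        public void Create() { _inner.Create(); }
    }

In production code you new up DirectoryInfoWrapper; in a unit test you substitute a fake or a mock of IDirectoryInfoWrapper.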

I would say that is a little over-engineered. I would think that the path for the DirectoryInfo should be passed in, or that some sort of configuration repository (or service) class should be passed in that holds (or gives you a way to get) that data.

I wouldn't register low-level classes like DirectoryInfo with the container - it's not really a problem to use new in some places in your application, as long as this requirement doesn't 'leak out' into higher-level classes. So in this example, it probably makes sense to create and use the DirectoryInfo directly in a class, yet have instances of that class be injected into their dependents by the container.
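In other words, something like the following sketch, where the path arrives via the constructor, the DirectoryInfo is newed up internally, and only the class itself is registered with the container (the class and member names are illustrative only):

    using System.IO;

    public class LogFolderCleaner   // hypothetical example class
    {
        private readonly DirectoryInfo _directory;

        public LogFolderCleaner(string logFolderPath)
        {
            // Low-level BCL type: fine to create directly here.
            _directory = new DirectoryInfo(logFolderPath);
        }

        public void DeleteOldLogs()
        {
            foreach (var file in _directory.GetFiles("*.log"))
            {
                file.Delete();
            }
        }
    }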

First, you can look at the SystemWrapper library. It wraps the DirectoryInfo class so you can mock it.
Next, you can look at a tutorial on TDD using Rhino Mocks and SystemWrapper. You can see DI in use there; however, the example doesn't use an IoC container.
Hope it helps.

Related

Writing a library that is dependency-injection "enabled"

I'm working on a project at the moment and it's going to be primarily library-based.
I want the library to be consumed using dependency injection, but I want the library to be largely agnostic towards the container being used.
I wrote a "bridge" library a while back to make this sort of thing easier, but I wasn't sure if this was actually the right approach? (library: https://github.com/clintkpearson/IoCBridge)
I don't want to reference the DI-technology (Ninject, Windsor etc) directly from my library as then it makes it inflexible for people using it.
There are a few other questions on SO in a similar vein but none of them seem to actually address the problem satisfactorily.
As a side note: I realise I could just make sure the library adheres to the general idiom and uses interfaces & ctor arguments for dependencies, and then just leave it up to the consuming app to register the types in the containers.
The only issue I can see with this (and correct me if I'm wrong) is that this requires the consuming app to actually know which types link to which interfaces, whether some need to be registered as singletons etc... and from a plug-and-play usage perspective that's pretty poor.
It's a bit controversial, but I suggest using Poor Man's Injection. I'm not saying it's great, but it has some valid use cases (just like Service Locator) under some constraints. It will require a little more maintenance, but it will save you from depending on another library for IoC container registration. You should read Mark Seemann's article on the subject.
I recently implemented this approach in a very simple library of mine. Basically you write two constructors for the public classes of the library.
private readonly IActionResultFactory _actionResultFactory;
private readonly IBaseUrlProvider _baseUrlProvider;

internal SitemapProvider(IActionResultFactory actionResultFactory, IBaseUrlProvider baseUrlProvider)
{
    _actionResultFactory = actionResultFactory;
    _baseUrlProvider = baseUrlProvider;
}

public SitemapProvider() : this(new ActionResultFactory(), new BaseUrlProvider()) { }
As you can see, only the second constructor is public, and you fill in the dependencies yourself. This also provides encapsulation at the assembly level. You can still test this class by adding an InternalsVisibleTo attribute to the assembly, and use dependency injection inside your library freely. The user can also create instances with the new keyword or add this class's interface to their IoC registration.
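For the testing side, that looks roughly like this (the assembly name is a placeholder, and the mocks use Moq, which is my own assumption, not part of the library above):

    // In the library's AssemblyInfo.cs (or any source file):
    [assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyLibrary.Tests")]

    // In MyLibrary.Tests - the internal constructor is now reachable:
    var actionResultFactory = new Mock<IActionResultFactory>();
    var baseUrlProvider = new Mock<IBaseUrlProvider>();

    var provider = new SitemapProvider(actionResultFactory.Object, baseUrlProvider.Object);
    // ...assert against provider, verifying calls on the mocks as needed.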
I don't know if there's a widely adopted IoC container registration library in .NET. I thought about writing one myself, but each container has its own unique features, and it gets more complicated with object life cycles. Also, people will be uneasy about depending on yet another library just for this.
A good DI implementation should enable DI on any object, regardless of the latter being DI-agnostic or not.
Prism is a bad example: the last time I used it (2 years ago) it required objects to be DI-aware by enforcing use of the [Injection] attribute. A better example is the Spring Framework (an extremely popular DI framework for Java, with a .NET port called Spring.NET), which keeps your objects DI-agnostic by enabling DI via so-called context files - XML files that describe dependencies. These need not be part of your library, leaving it a completely independent dll.
The example of Spring shows that you should not need any specific configuration, prerequisites or patterns to make an object injectable, or to allow objects to be injected into it, beyond the programming-to-interfaces paradigm and programmatic access to suitable constructors and property setters.
This does not mean that every DI framework supports manipulation of plain CLR (.NET) objects, a.k.a. POCOs. Some frameworks rely only on their own specific mechanisms and may not be suitable for use with DI-independent code. Usually those would require the library to take a direct dependency on the DI framework, which I think you want to (and probably should) avoid.
I think you have slightly misinterpreted the scope of Dependency Injection. DI is a pattern, a subset of IoC, and IoC containers make DI easy and convenient - they assist with dependency resolution. IoC can be categorised as a superset of several methodologies, of which DI is one part.
You do not need IoC frameworks in order to make Dependency Injection work.
If you really insist on using an IoC container instead of leveraging regular DI (i.e. constructor parameters or mandatory property setting), then you should nominate the container/framework; don't try to be all things to all people by kludging together adapters or bridges. Be cautious about over-engineering. A library by its very definition has a limited and well-defined set of functionality, so it should not need a large number of dependencies injected.
They'll most likely want to implement their own versions of some of the interfaces
You don't need an IoC framework to achieve this. If your constructors have their parameters defined as interfaces then in effect you've already achieved DI - the dependency is injected at construction time and you know nothing about the actual concrete implementation of it. Let the calling code worry about the nitty gritty details of which implementation of that interface it wants to pass in.
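For example, a consumer can do all of this with nothing but the new keyword (the type names below are made up for illustration):

    // Inside the library: it depends only on an interface.
    public class ReportGenerator
    {
        private readonly IDataSource _dataSource;

        public ReportGenerator(IDataSource dataSource)
        {
            _dataSource = dataSource;
        }
    }

    // In the consuming app: plain constructor injection, no container required.
    var generator = new ReportGenerator(new SqlDataSource("connection string here"));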

What good is Unity DI in MVC?

I'm slightly new to Unity and IoC, but not to MVC. I've been reading and reading about using Unity with MVC and the only really useful thing I'm consistently seeing is the ability to get free DI with the controllers.
To go from this:
public HomeController() : this(new UserRepository())
{
}
public HomeController(IUserRepository userRepository)
{
this.UserRepository = userRepository;
}
To this:
public HomeController(IUserRepository userRepository)
{
this.UserRepository = userRepository;
}
Basically, allowing me to drop the parameterless constructor. This is great and all, and I'm going to implement it for sure, but it doesn't seem like anything that great given all the hype about IoC libraries. Going the route of using Unity as a service locator sounds compelling, but many would argue it's an anti-pattern.
So my question is, with service locating out of the question and some DI opportunities with Views and Filters, is there anything else I gain from using Unity? I just want to make sure I'm not missing something wonderful like free DI support for all class constructors.
EDIT:
I understand the testability purpose behind using Unity DI with MVC controllers. But all I would have to do is add that one extra little constructor, nix Unity, and I could unit test just the same. Where is the great benefit in registering your repository types and having a custom controller factory when the alternative is simpler? The alternative being native DI. I guess I'm really wondering what is so great about Unity (or any IoC library) besides service locating, which is bad. Is free controller DI really the ONLY thing I get from Unity?
A good IoC container not only creates the concrete class for you, it examines the couplings between that type and other types. If there are additional dependencies, it resolves them and creates instances of all of the classes that are required.
You can do fancy things like conditional binding. Here's an example using Ninject (my preferred IoC):
ninjectKernel.Bind<IValueCalculator>().To<LinqValueCalculator>();
ninjectKernel.Bind<IValueCalculator>().To<IterativeValueCalculator>().WhenInjectedInto<LimitShoppingCart>();
What Ninject is doing here is creating an instance of IterativeValueCalculator when injecting into LimitShoppingCart, and an instance of LinqValueCalculator for any other injection.
The greatest benefits are separation of concerns (decoupling) and testability.
Regarding why Service Locator is considered bad (by some), you can read this blog post by Mark Seemann.
Answering your question about what is so good in Unity: apart from all the testability, loose coupling and other blah-blah-blahs everyone is talking about, you can use an awesome feature like Unity's Interception, which lets you do some AOP-like things. I've used it in some of my recent projects and liked it very much. Strongly recommended!
p.s. It seems the Castle Windsor DI container has a similar feature as well (called Interceptors). Other containers - not sure.
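To make the Interception point concrete, here is a rough sketch of how it gets wired up, based on my memory of the Unity 2/3 interception extension (IOrderService, OrderService and LoggingBehavior are made-up names; LoggingBehavior would be your own IInterceptionBehavior implementation):

    using Microsoft.Practices.Unity;
    using Microsoft.Practices.Unity.InterceptionExtension;

    var container = new UnityContainer();
    container.AddNewExtension<Interception>();

    // Every call through IOrderService now passes through LoggingBehavior first.
    container.RegisterType<IOrderService, OrderService>(
        new Interceptor<InterfaceInterceptor>(),
        new InterceptionBehavior<LoggingBehavior>());

    var service = container.Resolve<IOrderService>();  // resolves an interception proxy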
Besides testing (which is a huge benefit and should not be underestimated), dependency injection allows:
Maintainability: The ability to alter the behavior of your code with a single change.
If you decide to change the class that retrieves your users across all your controllers/services etc. without dependency injection, you need to update each and every constructor plus any other random new instances that are being created, provided you remember where each one lives. DI allows you to change one definition that is then used across all implementations.
Scope: With a single line of code you can alter your implementation to create a singleton, a class that is created once per web request, or one per thread (see the sketch at the end of this answer).
Readability: The use of dependency injection means that all your concrete classes are defined in one place. As a developer coming onto a project, I can quickly and easily see exactly which concrete classes are mapped to which interfaces and know that there are no hidden implementations. It means I can not only read the code better, but it also gives me the confidence to develop against it.
Design: I believe using dependency injection helps create well designed code. You automatically code to interfaces, your code becomes cleaner because you haven't got strange blocks of code to help you test
And let's not forget...
Testing: Testing is huge! Dependency injection allows you to test your code without having to write code specifically for tests. OK, you can create a new constructor, but what is to stop someone else using that constructor for a purpose it was not intended for? What if another developer comes along six months later and adds production logic to your 'test' constructor? OK, so you can make it internal, but that can still be used by production code. Why give them the option?
My experience with IoC frameworks has been largely around Ninject. As such, the above is based on what I know of Ninject, however the same principles should remain the same across other frameworks.
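To illustrate the Scope point above, lifetime really is a one-line change on the binding (Ninject syntax, matching the note above; IUserRepository/UserRepository stand in for your own types, and InRequestScope comes from the Ninject.Web.Common package):

    var kernel = new StandardKernel();

    kernel.Bind<IUserRepository>().To<UserRepository>().InSingletonScope();  // one shared instance
    // or
    kernel.Bind<IUserRepository>().To<UserRepository>().InRequestScope();    // one per web request
    // or
    kernel.Bind<IUserRepository>().To<UserRepository>().InThreadScope();     // one per thread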
No, the main point is that you then have a mapping somewhere that specifies the concrete type for IUserRepository. And the reason you might like that is that you can then write a unit test that supplies a mocked version of IUserRepository and execute your tests without changing anything in your controller.
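Something along these lines, using Moq and NUnit (both assumptions on my part; GetAll() and User are placeholder members, since the real IUserRepository isn't shown in the question):

    using System.Collections.Generic;
    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class HomeControllerTests
    {
        [Test]
        public void Index_returns_a_result_without_touching_the_database()
        {
            var repositoryMock = new Mock<IUserRepository>();
            repositoryMock.Setup(r => r.GetAll()).Returns(new List<User>());  // placeholder member

            var controller = new HomeController(repositoryMock.Object);

            var result = controller.Index();

            Assert.IsNotNull(result);
        }
    }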
Testability mostly, but I'd suggest looking at Ninject; it plays well with MVC. To get the full benefit of IoC you should really combine it with Moq or a similar mocking framework.
Besides testability, I think you can also add extensibility as one of the advantages. One of the best practices of software development is "work with abstractions, not implementations" and, while you can do this in several ways, Unity provides a very extensible and easy way to achieve it. By using it, you will be creating abstractions that define the contracts your components must comply with. Say you want to completely swap out the Repository your application currently uses. With DI you just "swap" the component without changing a single line of code in your application. If you are using hard references, you might need to change and recompile your application because of a component that is external to it (in a different layer).
So, bottom line, IMHO, using DI helps you to have pluggable components in your application and have a SOLID application design

Is an IoC container an overkill for a very simple framework

I am creating a library that integrates our product with another 3rd party product.
The design being used involves an interface that abstracts the core operations I'd like to integrate into our product, such that when a new 3rd-party API is released, we can transparently switch to it instead of the old one without modifying existing code.
To this end, the code that returns a concrete instance of the object interacting with the 3rd-party API needs to decide which implementation to select.
For simple needs, I assume an entry in the configuration file specifying the fully qualified name of the implementing class would suffice.
Should I use an IoC container in this case, or should I use the Factory Method pattern and code it myself (using reflection to instantiate the object, etc.)?
What are the pros and cons of each? Am I missing anything?
Quoting Mark Seemann:
Applications should depend on containers. Frameworks should not.
An IoC container sounds like overkill for your problem. If you have only one dependency to inject, doing it via the config file should be just fine.
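A minimal sketch of the config-file route, assuming an appSettings entry (the key name and the IThirdPartyClient interface are placeholders for your own names):

    using System;
    using System.Configuration;

    // app.config:
    // <add key="thirdPartyClientType" value="MyCompany.Integration.VendorV2Client, MyCompany.Integration" />
    string typeName = ConfigurationManager.AppSettings["thirdPartyClientType"];
    Type clientType = Type.GetType(typeName, throwOnError: true);
    var client = (IThirdPartyClient)Activator.CreateInstance(clientType);

Swapping to a new vendor API then means editing one config value, not recompiling the callers.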
An IoC container is never overkill. You WILL eventually expand on this app if it's successful, or at the least it will be used regularly and you will get requests to add more.
I'm a Castle user, and it's darn easy to add Castle with NuGet, then just create a new WindsorContainer() at startup and register your interfaces and classes.
It's like asking if TDD is overkill for a simple app. You should always (if you can) TDD the app, and use interfaces over concrete types in your classes. IoC is so easy to set up now that you're only adding a few lines of code, so why not? It will be much harder later on if you didn't use an IoC container from the start and have interfaces and newed-up classes scattered all over your project that then need to be organized back into one.
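For reference, the Windsor startup described here is only a few lines (IThirdPartyGateway and VendorV1Gateway are made-up names standing in for your interface and current implementation):

    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    var container = new WindsorContainer();
    container.Register(
        Component.For<IThirdPartyGateway>().ImplementedBy<VendorV1Gateway>());

    // Later, when the new vendor API ships, only this registration changes.
    var gateway = container.Resolve<IThirdPartyGateway>();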

Is it a good practice to create wrapper over 3rd party components like MS enterprise Library or Log4net?

This is more of a good-practice question. I want to offer different generic libraries like logging, caching etc. There are lots of third-party libraries for these, like MS Enterprise Library, log4net, NCache.
I wanted to know whether it's good practice to use these directly, or to create a wrapper over each service and use DI to inject that service into the code.
Regards
This is subjective, but also depends on the library.
For instance, log4net or NHibernate have strictly interface-based APIs. There is no need to wrap them. There are other libraries which will make your classes hard to test if you don't have any interfaces in between; there it might be advisable to write a clean interface.
It is sometimes good advice to let only a small part of the code access APIs like NHibernate or a GUI library. (Note: this is not wrapping, this is building abstraction layers.) On the other hand, it does not make sense to restrict access to log4net calls; they will be used across the whole application.
log4net is probably a good example where wrapping is just overkill. Some people suffer from "wrappitis", which is an anti-pattern (sometimes referred to as "wrapping wool in cotton") and just produces more work. log4net has such a simple API and is so highly customizable that they most probably made it better than your wrapper will be.
You will find that wrapping a library does not let you simply exchange it for another product. Upgrading to newer versions will not be easier either; rather, you will need to update your wrapper for nothing.
If you want to be able to swap implementations of those concepts, creating a wrapper is the way to go.
For logging there already is something like that available Common.Logging.
Using wrapping interfaces does indeed make unit testing much easier, but what's equally important, it makes it possible to use mocks.
As an example, the PRISM framework for Silverlight and WPF defines an ILoggerFacade interface with a simple method named Log. Using this concept, here's how I define a mocked logger (using Moq) in my unit tests:
var loggerMock = new Mock<ILoggerFacade>(MockBehavior.Loose);
loggerMock.Setup(lg => lg.Log(It.IsAny<string>(), It.IsAny<Category>(), It.IsAny<Priority>()))
.Callback((string s, Category c, Priority p) => Debug.Write(string.Format("**** {0}", s)));
Later you can pass loggerMock.Object to the tested object via constructor or property, or configure a dependency injector that uses it.
It sounds like you are thinking of wrapping the logging implementation and then sharing with different teams. Based on that here are some pros and cons.
Advantages of Wrapping
- Can abstract away interfaces and dependencies from the implementation. This provides some measure of protection against breaking changes in the implementation library.
- Can make standards enforcement easier and align different projects' implementations.
Disadvantages of Wrapping
- Additional development work.
- Potential additional documentation work (how to use the new library).
- More likely to have bugs in the wrapping code than in the mature library. (Deploying your bug fixes can be a big headache!)
- Developers need to learn the new library (even if very simple).
- Can sometimes be difficult to wrap an entire library without leaking implementation interfaces. These types of wrapper classes usually offer no value other than obfuscation, e.g. a MyDbCommand class that wraps some other DbCommand class.
I've wrapped part of Enterprise Library before and I didn't think it added much value. I think you would be better off:
- Documenting the best practices and usage
- Providing a reference implementation
- Verifying compliance (code reviews etc.)
This is more of a subjective question, but IMO it's a good thing to wrap any application- or library-specific usage in a service-model design with well-thought-out interfaces. That way you can easily use DI, and if you ever need to switch from, say, the EntLib Data Access Application Block to NHibernate, you don't need to re-architect your whole application.
I generally create a "helper" or "service" class that can be called statically to wrap the common functionality of these libraries.
I don't know that there is a tremendous amount of risk in directly referencing/calling these, since they are definitely mature projects (at least EntLib and Log4Net), but having a wrapper isolates you from the confusion of version change, etc. and gives you more options in the future, at a fairly low cost.
I think it's better to use a wrapper, personally, simply because these are things you don't want to be running when your unit tests run (assuming you're unit testing also).
Yes if being able to replace the implementation is a requirement now or in a reasonable future.
No otherwise.
Defining the interface that your application will use for all logging/enterprising/... purposes is the core work here. Writing the wrapper is merely a way to make the compiler enforce use of this interface rather than the actual implementation.
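As a sketch of what "defining the interface" plus a thin wrapper might look like for logging (the ILogger interface below is my own assumption; the ILog/LogManager calls are log4net's real API):

    using System;
    using log4net;

    public interface ILogger
    {
        void Info(string message);
        void Error(string message, Exception exception);
    }

    public class Log4NetLogger : ILogger
    {
        private readonly ILog _log;

        public Log4NetLogger(Type source)
        {
            _log = LogManager.GetLogger(source);
        }

        public void Info(string message) { _log.Info(message); }
        public void Error(string message, Exception exception) { _log.Error(message, exception); }
    }

Application code depends only on ILogger; replacing log4net later means writing one new adapter, not touching every call site.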

Can anyone explain to me, at length, how to use IOC containers?

I use dependency injection through parameters and constructors extensively. I understand the principle to this degree and am happy with it. On my large projects, I end up with too many dependencies being injected (anything hitting double figures feels too big - I like the term 'macaroni code').
As such, I have been considering IOC containers. I have read a few articles on them and so far I have failed to see the benefit. I can see how it assists in sending groups of related objects or in getting the same type over and over again. I'm not sure how they would help me in my projects where I may have over a hundred classes implementing the same interface, and where I use all of them in varying orders.
So, can anybody point me at some good articles that not only describe the concepts of IOC containers (preferably without hyping one in particular), but also show in detail how they benefit me in this type of project and how they fit into the scope of a large architecture?
I would hope to see some non-language specific stuff but my preferred language if necessary is C#.
Inversion of Control is primarily about dependency management and providing testable code. From a classic approach, if a class has a dependency, the natural tendency is to give the class that has the dependency direct control over managing its dependencies. This usually means the class that has the dependency will 'new' up its dependencies within a constructor or on demand in its methods.
Inversion of Control is just that...it inverts what creates dependencies, externalizing that process and injecting them into the class that has the dependency. Usually, the entity that creates the dependencies is what we call an IoC container, which is responsible for not only creating and injecting dependencies, but also managing their lifetimes, determining their lifestyle (more on this in a sec), and also offering a variety of other capabilities. (This is based on Castle MicroKernel/Windsor, which is my IoC container of choice...it's solidly written, very functional, and extensible. Other IoC containers exist that are simpler if you have simpler needs, like Ninject, Microsoft Unity, and Spring.NET.)
Consider that you have an internal application that can be used either in a local context or a remote context. Depending on some detectable factors, your application may need to load up "local" implementations of your services, and in other cases it may need to load up "remote" implementations of your services. If you follow the classic approach, and create your dependencies directly within the class that has those dependencies, then that class will be forced to break two very important rules about software development: Separation of Concerns and Single Responsibility. You cross boundaries of concern because your class is now concerned about both its intrinsic purpose, as well as the concern of determining which dependencies it should create and how. The class is also now responsible for many things, rather than a single thing, and has many reasons to change: its intrinsic purpose changes, the creation process for its dependencies changes, the way it finds remote dependencies changes, what dependencies its dependencies may need, etc.
By inverting your dependency management, you can improve your system architecture and maintain SoC and SR (or, possibly, achieve it when you were previously unable to due to dependencies.) Since an external entity, the IoC container, now controls how your dependencies are created and injected, you can also gain additional capabilities. The container can manage the life cycles of your dependencies, creating and destroying them in more flexible ways that can improve efficiency. You also gain the ability to manage the life styles of your objects. If you have a type of dependency that is created, used, and returned on a very frequent basis, but which have little or no state (say, factories), you can give them a pooled lifestyle, which will tell the container to automatically create an object pool for that particular dependency type. Many lifestyles exist, and a container like Castle Windsor will usually give you the ability to create your own.
The better IoC containers, like Castle Windsor, also provide a lot of extensibility. By default, Windsor allows you to create instances of local types. It's possible to create Facilities that extend Windsor's type-creation capabilities to dynamically create web service proxies and WCF service hosts on the fly, at runtime, eliminating the need to create them manually or statically with tools like svcutil (this is something I did myself just recently). Many facilities exist to bring IoC support to existing frameworks, like NHibernate, ActiveRecord, etc.
Finally, IoC enforces a style of coding that ensures unit testable code. One of the key factors in making code unit testable is externalizing dependency management. Without the ability to provide alternative (mocked, stubbed, etc.) dependencies, testing a single "unit" of code in isolation is a very difficult task, leaving integration testing the only alternative style of automated testing. Since IoC requires that your classes accept dependencies via injection (by constructor, property, or method), each class is usually, if not always, reduced to a single responsibility of properly separated concern, and fully mockable dependencies.
IoC = better architecture, greater cohesion, improved separation of concerns, classes that are easier to reduce to a single responsibility, easily configurable and interchangeable dependencies (often without requiring a recompilation of your code), flexible dependency life styles and life time management, and unit testable code. IoC is kind of a lifestyle...a philosophy, an approach to solving common problems and meeting critical best practices like SoC and SR.
Even (or rather, particularly) with hundreds of different implementations of a single interface, IoC has a lot to offer. It might take a while to get your head fully wrapped around it, but once you fully understand what IoC is and what it can do for you, you'll never want to do things any other way (except perhaps embedded systems development...)
If you have over a hundred classes implementing a common interface, an IoC container won't help very much; you need a factory.
That way, you may do the following:
public interface IMyInterface
{
    // ...
}

public class Factory
{
    public static IMyInterface GetObject(string param)
    {
        // param is a parameter that will help the Factory decide which object to return
        // (that is only an example, there may not be any parameter at all)
        // e.g. return new SomeConcreteImplementation();  // placeholder for one of your implementations
        throw new NotImplementedException();
    }
}

// ...
// You do not depend on a particular implementation here
IMyInterface obj = Factory.GetObject("some param");
Inside the factory, you may use an IoC Container to retrieve the objects if you like, but you'll have to register each one of the classes that implement the given interface and associate them to some keys (and use those keys as parameters in GetObject() method).
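With Unity, for instance, that key-based registration might look something like this (ImplementationA and ImplementationB are placeholders for your concrete classes):

    using Microsoft.Practices.Unity;

    var container = new UnityContainer();
    container.RegisterType<IMyInterface, ImplementationA>("a");
    container.RegisterType<IMyInterface, ImplementationB>("b");

    // Inside Factory.GetObject:
    IMyInterface obj = container.Resolve<IMyInterface>(param);  // param is "a" or "b"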
An IoC is particularly useful when you have to retrieve objects that implement different interfaces:
IMyInteface myObject = Container.GetObject<IMyInterface>();
IMyOtherInterface myOtherObject = Container.GetObject<IMyOtherInterface>();
ISomeOtherInterface someOtherObject = Container.GetObject<ISomeOtherInterface>();
See? Only one object to get several objects of different types, and no keys (the interfaces themselves are the keys). If you need one object to get several different objects that all implement the same interface, an IoC container won't help you very much.
In the past few weeks, I've taken the plunge from dependency-injection only to full-on inversion of control with Castle, so I understand where your question is coming from.
Some reasons why I wouldn't want to use an IOC container:
It's a small project that isn't going to grow that much. If there's a 1:1 relationship between constructors and calls to those constructors, using an IOC container isn't going to reduce the amount of code I have to write. You're not violating "don't repeat yourself" until you're finding yourself copying and pasting the exact same "var myObject = new MyClass(someInjectedDependency)" for a second time.
I may have to adapt existing code to facilitate being loaded into IOC containers. This probably isn't necessary until you get into some of the cooler Aspect-oriented programming features, but if you've forgotten to make a method virtual, sealed off that method's class, and it doesn't implement an interface, and you're uncomfortable making those changes because of existing dependencies, then making the switch isn't quite as appealing.
It adds an additional external dependency to my project -- and to my team. I can convince the rest of my team that structuring their code to allow DI is swell, but I'm currently the only one that knows how to work with Castle. On smaller, less complicated projects, this isn't going to be an issue. For the larger projects (that, ironically, would reap the most benefit from IOC containers), if I can't evangelize using an IOC container well enough, going maverick on my team isn't going to help anybody.
Some of the reasons why I wouldn't want to go back to plain DI:
I can add logging to, or remove it from, any number of my classes without adding any sort of trace or logging statement. Having my classes interwoven with additional functionality without changing those classes is extremely powerful. For example:
Logging: http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx
Transactions: http://www.codeproject.com/KB/architecture/introducingcastle.aspx (skip down to the Transaction section)
Castle, at least, is so helpful when wiring up classes to dependencies, that it would be painful to go back.
For example, missing a dependency with Castle:
"Can't create component 'MyClass' as
it has dependencies to be satisfied.
Service is waiting for the following
dependencies:
Services:
- IMyService which was not registered."
Missing a dependency without Castle:
Object reference not set to an instance of an object.
Dead last: the ability to swap injected services at runtime by editing an XML file. My perception is that this is the most touted feature, but I see it as merely icing on the cake. I'd rather wire up all my services in code, but I'm sure I'll run into a headache in the future where my mind will be changed on this.
I will admit that -- being a newbie to IOC and Castle -- I'm probably only scratching the surface, but so far, I genuinely like what I see. I feel like the last few projects I've built with it are genuinely capable of reacting to the unpredictable changes that arise from day to day at my company, a feeling I've never quite had before.
Try these:
http://www.martinfowler.com/articles/injection.html
http://msdn.microsoft.com/en-us/library/aa973811.aspx
I have no links but can provide you with an example:
You have a web controller that needs to call a service which has a data access layer.
Now, I take it that in your code you are constructing these objects yourself at compile time. You are using a decent design pattern, but if you ever need to change the implementation of, say, the DAO, you have to go into your code, remove the code that sets up this dependency, then recompile/test/deploy. But if you were to use an IoC container, you would just change the class in the configuration and restart the application.
Jeremy Frey misses one of the biggest reasons for using an IOC container: it makes your code easier to mock and test.
Encouraging the use of interfaces has lots of other nice benefits: better layering, easier to dynamically generate proxies for things like declarative transactions, aspect-oriented programming and remoting.
If you think IOC is only good for replacing calls to "new", you don't get it.
IoC containers usually do the dependency injection, which in some projects is not a big deal, but some of the frameworks that provide IoC containers offer other services that make them worth using.
Castle, for example, has a complete list of services besides an IoC container. Dynamic proxies, transaction management and NHibernate facilities are some of them.
So I think you should consider IoC containers as part of an application framework.
Here's why I use an IoC container:
1. Writing unit tests will be easier. Actually, you write different configurations to do different things.
2. Adding different plugins for different scenarios (for different customers, for example).
3. Intercepting classes to add different aspects to our code.
4. Since we are using NHibernate, the transaction management and NHibernate facilities of Castle are very helpful in developing and maintaining our code.
It's like every technical aspects of our application is handled using an application framework and we have time to think about what customers really want.
