Can anyone explain to me, at length, how to use IOC containers? - c#

I use dependency injection through parameters and constructors extensively. I understand the principle to this degree and am happy with it. On my large projects, I end up with too many dependencies being injected (anything hitting double figures feels too big - I like the term 'macaroni code').
As such, I have been considering IOC containers. I have read a few articles on them and so far I have failed to see the benefit. I can see how it assists in sending groups of related objects or in getting the same type over and over again. I'm not sure how they would help me in my projects where I may have over a hundred classes implementing the same interface, and where I use all of them in varying orders.
So, can anybody point me at some good articles that not only describe the concepts of IOC containers (preferably without hyping one in particular), but also show in detail how they benefit me in this type of project and how they fit into the scope of a large architecture?
I would hope to see some non-language specific stuff but my preferred language if necessary is C#.

Inversion of Control is primarily about dependency management and providing testable code. From a classic approach, if a class has a dependency, the natural tendency is to give the class that has the dependency direct control over managing its dependencies. This usually means the class that has the dependency will 'new' up its dependencies within a constructor or on demand in its methods.
Inversion of Control is just that... it inverts what creates dependencies, externalizing that process and injecting them into the class that has the dependency. Usually, the entity that creates the dependencies is what we call an IoC container, which is responsible for not only creating and injecting dependencies, but also managing their lifetimes, determining their lifestyle (more on this in a sec), and offering a variety of other capabilities. (This is based on Castle MicroKernel/Windsor, which is my IoC container of choice... it's solidly written, very functional, and extensible. Other IoC containers exist that are simpler if you have simpler needs, like Ninject, Microsoft Unity, and Spring.NET.)
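As a minimal sketch of that register/resolve cycle, here is roughly what it looks like with Windsor's fluent API (the service and implementation names are invented for illustration):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IGreetingService { string Greet(string name); }

public class ConsoleGreetingService : IGreetingService
{
    public string Greet(string name) { return "Hello, " + name; }
}

public static class Program
{
    public static void Main()
    {
        // The container, not the consuming class, decides which
        // implementation satisfies IGreetingService.
        var container = new WindsorContainer();
        container.Register(
            Component.For<IGreetingService>().ImplementedBy<ConsoleGreetingService>());

        var service = container.Resolve<IGreetingService>();
        System.Console.WriteLine(service.Greet("world"));
    }
}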
Consider that you have an internal application that can be used either in a local context or a remote context. Depending on some detectable factors, your application may need to load up "local" implementations of your services, and in other cases it may need to load up "remote" implementations of your services. If you follow the classic approach, and create your dependencies directly within the class that has those dependencies, then that class will be forced to break two very important rules about software development: Separation of Concerns and Single Responsibility. You cross boundaries of concern because your class is now concerned about both its intrinsic purpose, as well as the concern of determining which dependencies it should create and how. The class is also now responsible for many things, rather than a single thing, and has many reasons to change: its intrinsic purpose changes, the creation process for its dependencies changes, the way it finds remote dependencies changes, what dependencies its dependencies may need, etc.
By inverting your dependency management, you can improve your system architecture and maintain SoC and SR (or, possibly, achieve them when you were previously unable to due to dependencies). Since an external entity, the IoC container, now controls how your dependencies are created and injected, you can also gain additional capabilities. The container can manage the life cycles of your dependencies, creating and destroying them in more flexible ways that can improve efficiency. You also gain the ability to manage the lifestyles of your objects. If you have a type of dependency that is created, used, and returned on a very frequent basis, but which has little or no state (say, factories), you can give it a pooled lifestyle, which tells the container to automatically create an object pool for that particular dependency type. Many lifestyles exist, and a container like Castle Windsor will usually give you the ability to create your own.
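In Windsor, for instance, the lifestyle is declared at registration time. A hedged sketch (the component names are hypothetical, and the pool sizes are just examples):

container.Register(
    // One shared instance for the whole application.
    Component.For<IConfigReader>().ImplementedBy<FileConfigReader>()
             .LifestyleSingleton(),

    // A fresh instance every time the component is resolved.
    Component.For<IOrderValidator>().ImplementedBy<OrderValidator>()
             .LifestyleTransient(),

    // Frequently used, nearly stateless components can be pooled.
    Component.For<IReportFactory>().ImplementedBy<ReportFactory>()
             .LifestylePooled(initialSize: 5, maxSize: 20));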
The better IoC containers, like Castle Windsor, also provide a lot of extensibility. By default, Windsor allows you to create instances of local types. It's possible to create Facilities that extend Windsor's type-creation capabilities to dynamically create web service proxies and WCF service hosts on the fly, at runtime, eliminating the need to create them manually or statically with tools like svcutil (this is something I did myself just recently). Many facilities exist to bring IoC support to existing frameworks, like NHibernate, ActiveRecord, etc.
Finally, IoC enforces a style of coding that ensures unit-testable code. One of the key factors in making code unit testable is externalizing dependency management. Without the ability to provide alternative (mocked, stubbed, etc.) dependencies, testing a single "unit" of code in isolation is a very difficult task, leaving integration testing as the only alternative style of automated testing. Since IoC requires that your classes accept dependencies via injection (by constructor, property, or method), each class usually ends up with a single responsibility, properly separated concerns, and fully mockable dependencies.
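For example, a constructor-injected class can be tested with a hand-rolled stub, and no container is needed in the test at all (the types here are hypothetical):

public interface IExchangeRateFeed { decimal GetRate(string currency); }

public class InvoiceCalculator
{
    private readonly IExchangeRateFeed _rates;

    // The dependency is injected; the class never news it up itself.
    public InvoiceCalculator(IExchangeRateFeed rates) { _rates = rates; }

    public decimal ToLocal(decimal amount, string currency)
    {
        return amount * _rates.GetRate(currency);
    }
}

// In a unit test, a stub stands in for the real feed:
public class FixedRateFeed : IExchangeRateFeed
{
    public decimal GetRate(string currency) { return 2m; }
}

// var calc = new InvoiceCalculator(new FixedRateFeed());
// Assert.AreEqual(20m, calc.ToLocal(10m, "EUR"));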
IoC = better architecture, greater cohesion, improved separation of concerns, classes that are easier to reduce to a single responsibility, easily configurable and interchangeable dependencies (often without requiring a recompilation of your code), flexible dependency life styles and life time management, and unit testable code. IoC is kind of a lifestyle...a philosophy, an approach to solving common problems and meeting critical best practices like SoC and SR.
Even (or rather, particularly) with hundreds of different implementations of a single interface, IoC has a lot to offer. It might take a while to get your head fully wrapped around it, but once you fully understand what IoC is and what it can do for you, you'll never want to do things any other way (except perhaps embedded systems development...)

If you have over a hundred classes implementing a common interface, an IoC container won't help very much; you need a factory.
That way, you may do the following:
public interface IMyInterface
{
    // ...
}

public class Factory
{
    public static IMyInterface GetObject(string param)
    {
        // param is a parameter that will help the Factory decide what
        // object to return (only an example; there may be no parameter at all)
        throw new NotImplementedException(); // selection logic goes here
    }
}
//...
// You do not depend on a particular implementation here
IMyInterface obj = Factory.GetObject("some param");
Inside the factory, you may use an IoC container to retrieve the objects if you like, but you'll have to register each one of the classes that implement the given interface and associate them with keys (and use those keys as parameters in the GetObject() method).
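A hedged sketch of that arrangement, backing the factory with Windsor's named (keyed) registrations; the keys and implementation types are illustrative:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public static class Factory
{
    private static readonly IWindsorContainer Container = new WindsorContainer();

    static Factory()
    {
        // Each implementation is registered under a key...
        Container.Register(
            Component.For<IMyInterface>().ImplementedBy<LocalImplementation>().Named("local"),
            Component.For<IMyInterface>().ImplementedBy<RemoteImplementation>().Named("remote"));
    }

    public static IMyInterface GetObject(string param)
    {
        // ...and the parameter selects which one comes back.
        return Container.Resolve<IMyInterface>(param);
    }
}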
An IoC is particularly useful when you have to retrieve objects that implement different interfaces:
IMyInterface myObject = Container.GetObject<IMyInterface>();
IMyOtherInterface myOtherObject = Container.GetObject<IMyOtherInterface>();
ISomeOtherInterface someOtherObject = Container.GetObject<ISomeOtherInterface>();
See? Only one object to get several objects of different types, and no keys (the interfaces themselves are the keys). If you need one object to get several different objects that all implement the same interface, an IoC container won't help you very much.

In the past few weeks, I've taken the plunge from dependency-injection only to full-on inversion of control with Castle, so I understand where your question is coming from.
Some reasons why I wouldn't want to use an IOC container:
It's a small project that isn't going to grow that much. If there's a 1:1 relationship between constructors and calls to those constructors, using an IOC container isn't going to reduce the amount of code I have to write. You're not violating "don't repeat yourself" until you're finding yourself copying and pasting the exact same "var myObject = new MyClass(someInjectedDependency)" for a second time.
I may have to adapt existing code to facilitate being loaded into IOC containers. This probably isn't necessary until you get into some of the cooler Aspect-oriented programming features, but if you've forgotten to make a method virtual, sealed off that method's class, and it doesn't implement an interface, and you're uncomfortable making those changes because of existing dependencies, then making the switch isn't quite as appealing.
It adds an additional external dependency to my project -- and to my team. I can convince the rest of my team that structuring their code to allow DI is swell, but I'm currently the only one that knows how to work with Castle. On smaller, less complicated projects, this isn't going to be an issue. For the larger projects (that, ironically, would reap the most benefit from IOC containers), if I can't evangelize using an IOC container well enough, going maverick on my team isn't going to help anybody.
Some of the reasons why I wouldn't want to go back to plain DI:
I can add logging to, or remove it from, any number of my classes without writing a single trace or logging statement in them. Having the ability for my classes to be interwoven with additional functionality without changing those classes is extremely powerful. For example:
Logging: http://ayende.com/Blog/archive/2008/07/31/Logging--the-AOP-way.aspx
Transactions: http://www.codeproject.com/KB/architecture/introducingcastle.aspx (skip down to the Transaction section)
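For a flavour of how this works, a rough sketch of a logging interceptor with Castle DynamicProxy (hedged; the posts linked above cover the real wiring in depth):

using System;
using Castle.DynamicProxy;

public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling " + invocation.Method.Name);
        invocation.Proceed();   // run the intercepted method
        Console.WriteLine("Done " + invocation.Method.Name);
    }
}

// Registered with Windsor, it wraps a component without touching its code:
// container.Register(
//     Component.For<LoggingInterceptor>(),
//     Component.For<IOrderService>().ImplementedBy<OrderService>()
//              .Interceptors<LoggingInterceptor>());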
Castle, at least, is so helpful when wiring up classes to dependencies that it would be painful to go back.
For example, missing a dependency with Castle:
"Can't create component 'MyClass' as
it has dependencies to be satisfied.
Service is waiting for the following
dependencies:
Services:
- IMyService which was not registered."
Missing a dependency without Castle:
Object reference not set to an instance of an object.
Dead last: the ability to swap injected services at runtime by editing an XML file. My perception is that this is the most touted feature, but I see it as merely icing on the cake. I'd rather wire up all my services in code, but I'm sure I'll run into a headache in the future that will change my mind on this.
I will admit that -- being a newbie to IOC and Castle -- I'm probably only scratching the surface, but so far, I genuinely like what I see. I feel like the last few projects I've built with it are genuinely capable of reacting to the unpredictable changes that arise from day to day at my company, a feeling I've never quite had before.

Try these:
http://www.martinfowler.com/articles/injection.html
http://msdn.microsoft.com/en-us/library/aa973811.aspx

I have no links but can provide you with an example:
You have a web controller that needs to call a service which has a data access layer.
Now, I take it that in your code you are constructing these objects yourself at compile time. You are using a decent design pattern, but if you ever need to change the implementation of, say, the DAO, you have to go into your code, remove the code that sets up this dependency, and then recompile/test/deploy. But if you were to use an IoC container, you would just change the class in the configuration and restart the application.
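As a sketch of the same idea in code (the types are hypothetical, and Castle Windsor's fluent registration stands in for the XML configuration):

// Composition root: the only place that knows the concrete types.
container.Register(
    Component.For<IOrderDao>().ImplementedBy<LinqToSqlOrderDao>(),
    Component.For<OrderService>(),
    Component.For<OrderController>());

// Swapping the DAL later is a one-line change:
//     Component.For<IOrderDao>().ImplementedBy<StoredProcOrderDao>()
// The service and controller are then built against the new DAO
// the next time they are resolved; nothing else changes.
var controller = container.Resolve<OrderController>();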

Jeremy Frey misses one of the biggest reasons for using an IOC container: it makes your code easier to mock and test.
Encouraging the use of interfaces has lots of other nice benefits: better layering, easier to dynamically generate proxies for things like declarative transactions, aspect-oriented programming and remoting.
If you think IOC is only good for replacing calls to "new", you don't get it.

IoC containers usually do the dependency injection, which in some projects is not a big deal, but some of the frameworks that provide IoC containers offer other services that make them worth using.
Castle, for example, has a complete list of services besides an IoC container. Dynamic proxies, transaction management and NHibernate facilities are some of them.
So I think you should consider IoC containers as part of an application framework.
Here's why I use an IoC container:
1. Writing unit tests will be easier. Actually, you write different configurations to do different things.
2. Adding different plugins for different scenarios (for different customers, for example).
3. Intercepting classes to add different aspects to our code.
4. Since we are using NHibernate, the transaction management and NHibernate facilities of Castle are very helpful in developing and maintaining our code.
It's like every technical aspect of our application is handled using an application framework, and we have time to think about what customers really want.

Related

Writing a library that is dependency-injection "enabled"

I'm working on a project at the moment and it's going to be primarily library-based.
I want the library to be consumed using dependency injection, but I want the library to be largely agnostic towards the container being used.
I wrote a "bridge" library a while back to make this sort of thing easier, but I wasn't sure if this was actually the right approach? (library: https://github.com/clintkpearson/IoCBridge)
I don't want to reference the DI-technology (Ninject, Windsor etc) directly from my library as then it makes it inflexible for people using it.
There are a few other questions on SO in a similar vein but none of them seem to actually address the problem satisfactorily.
As a side note: I realise I could just make sure the library adheres to the general idiom and uses interfaces & ctor arguments for dependencies, and then just leave it up to the consuming app to register the types in the containers.
The only issue I can see with this (and correct me if I'm wrong) is that this requires the consuming app to actually know which types link to which interfaces, whether some need to be registered as singletons etc... and from a plug-and-play usage perspective that's pretty poor.
It's a bit controversial, but I suggest using Poor Man's Injection. I'm not saying it's great, but it has some valid use cases (just like Service Locator) under some constraints. It will require a little more maintenance, but it will save you from depending on another library for IoC container registration. You should read Mark Seemann's article on the subject.
I recently implemented this approach in a very simple library of mine. Basically you write two constructors for the public classes of the library.
internal SitemapProvider(IActionResultFactory actionResultFactory, IBaseUrlProvider baseUrlProvider)
{
    _actionResultFactory = actionResultFactory;
    _baseUrlProvider = baseUrlProvider;
}

public SitemapProvider() : this(new ActionResultFactory(), new BaseUrlProvider()) { }
As you can see, only the second constructor is public, and it fills in the dependencies itself. This also provides encapsulation at the assembly level. You can still test this class by adding an InternalsVisibleTo attribute to the assembly, and you can use dependency injection inside your library freely. The user can also create instances with the new keyword, or add this class's interface to their IoC registration.
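For reference, the InternalsVisibleTo part is a single attribute in the library; the test assembly name here is hypothetical:

// In the library's AssemblyInfo.cs
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MyLibrary.Tests")]

// The test project can now reach the internal constructor:
// var provider = new SitemapProvider(stubActionResultFactory, stubBaseUrlProvider);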
I don't know of a widely adopted IoC container registration library in .NET. I thought about writing one myself, but each container has its own unique features, and it gets more complicated with object life cycles. Also, people will be uneasy about depending on another library for this.
A good DI implementation should enable DI on any object, regardless of the latter being DI-agnostic or not.
Prism is a bad example: the last time I used it (2 years ago), it required objects to be DI-aware by enforcing use of the [Injection] attribute. A good example of the DI-agnostic approach is Spring Framework (an extremely popular DI framework for Java, with a .NET port called Spring.NET), which enables DI via so-called context files - XML files that describe dependencies. These files need not be part of your library, leaving it as a completely independent dll.
The example of Spring suggests that you should not need any specific configuration, prerequisites or patterns in order to make an object injectable, or to allow objects to be injected into it, beyond the programming-to-interfaces paradigm and programmatic access to suitable constructors and property setters.
That said, not every DI framework can manipulate plain CLR (.NET) objects, a.k.a. POCOs. Some frameworks rely only on their own specific mechanisms and may not be suitable for use with DI-independent code. Usually, those would require the library to take a direct dependency on the DI framework, which I think you want to (and probably should) avoid.
I think you have slightly misinterpreted the scope of Dependency Injection. DI is a pattern, a subset of IoC, and IoC containers make DI easy and convenient - they assist with dependency resolution. IoC can be categorised as a superset of several methodologies, of which DI is one part.
You do not need IoC frameworks in order to make Dependency Injection work.
If you really insist on using an IoC container instead of leveraging regular DI (i.e. constructor parameters or mandatory property setting), then you should nominate the container/framework; don't try to be all things to all people by kludging together adapters or bridges. Be cautious about over-engineering. A library by its very definition has a limited and well-defined set of functionality, so it should not need a large number of dependencies injected.
>> "They'll most likely want to implement their own versions of some of the interfaces"
You don't need an IoC framework to achieve this. If your constructors have their parameters defined as interfaces then in effect you've already achieved DI - the dependency is injected at construction time and you know nothing about the actual concrete implementation of it. Let the calling code worry about the nitty gritty details of which implementation of that interface it wants to pass in.

What good is Unity DI in MVC?

I'm slightly new to Unity and IoC, but not to MVC. I've been reading and reading about using Unity with MVC and the only really useful thing I'm consistently seeing is the ability to get free DI with the controllers.
To go from this:
public HomeController() : this(new UserRepository())
{
}

public HomeController(IUserRepository userRepository)
{
    this.UserRepository = userRepository;
}
To this:
public HomeController(IUserRepository userRepository)
{
    this.UserRepository = userRepository;
}
Basically, allowing me to drop the parameterless constructor. This is great and all, and I'm going to implement it for sure, but it doesn't seem like anything that great given all the hype about IoC libraries. Going the way of using Unity as a service locator sounds compelling, but many would argue it's an anti-pattern.
So my question is, with service locating out of the question and some DI opportunities with Views and Filters, is there anything else I gain from using Unity? I just want to make sure I'm not missing something wonderful like free DI support for all class constructors.
EDIT:
I understand the testability purpose behind using Unity DI with MVC controllers. But all I would have to do is add that one extra little constructor, nix Unity, and I could unit test just the same. Where is the great benefit in registering your repository types and having a custom controller factory when the alternative is simpler? The alternative being native DI. I guess I'm really wondering what is so great about Unity (or any IoC library) besides service location, which is bad. Is free controller DI really the ONLY thing I get from Unity?
A good IoC container not only creates the concrete class for you, it examines the couplings between that type and other types. If there are additional dependencies, it resolves them and creates instances of all of the classes that are required.
You can do fancy things like conditional binding. Here's an example using Ninject (my preferred IoC):
ninjectKernel.Bind<IValueCalculator>().To<LinqValueCalculator>();
ninjectKernel.Bind<IValueCalculator>().To<IterativeValueCalculator>()
             .WhenInjectedInto<LimitShoppingCart>();
What Ninject is doing here is creating an instance of IterativeValueCalculator when injecting into LimitShoppingCart, and an instance of LinqValueCalculator for any other injection.
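A hedged usage sketch: with those bindings registered on a kernel, resolving the root type lets Ninject apply the conditional binding on its own.

using Ninject;

var kernel = new StandardKernel();
// ... the two Bind<IValueCalculator>() calls shown above ...

// LimitShoppingCart's constructor receives an IterativeValueCalculator;
// any other class asking for IValueCalculator gets a LinqValueCalculator.
var cart = kernel.Get<LimitShoppingCart>();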
The greatest benefits are separation of concerns (decoupling) and testability.
Regarding why Service Locator is considered bad (by some), you can read this blog post by Mark Seemann.
Answering your question about what is so good in Unity: apart from all the testability, loose coupling and the other things everyone talks about, you get an awesome feature in Unity's Interception, which allows you to do some AOP-like things. I've used it in some recent projects and liked it very much. Strongly recommended!
P.S. The Castle Windsor DI container has a similar feature as well (called interceptors). Other containers - not sure.
Besides testing (which is a huge benefit and should not be underestimated), dependency injection allows:
Maintainability: The ability to alter the behavior of your code with a single change.
If you decide to change the class that retrieves your users across all your controllers/services etc., then without dependency injection you need to update each and every constructor, plus any other random new instances being created - provided you remember where each one lives. DI allows you to change one definition that is then used across all implementations.
Scope: With a single line of code you can alter your implementation to create a singleton, a class that is created once per web request, or a class that is created once per thread (see the Ninject sketch after this list).
Readability: The use of dependency injection means that all your concrete classes are defined in one place. As a developer coming onto a project, I can quickly and easily see exactly which concrete classes are mapped to which interfaces and know that there are no hidden implementations. It means I can not only read the code better, but it gives me the confidence to develop against it.
Design: I believe using dependency injection helps create well-designed code. You automatically code to interfaces, and your code becomes cleaner because you haven't got strange blocks of code just to help you test.
And let's not forget...
Testing: Testing is huge! Dependency injection allows you to test your code without having to write code specifically for tests. OK, you can create a new constructor, but what is to stop anyone else using that constructor for a purpose it was not intended for? What if another developer comes along six months later and adds production logic to your 'test' constructor? OK, so you can make it internal, but it can still be used by production code. Why give them the option?
My experience with IoC frameworks has been largely with Ninject, so the above is based on what I know of Ninject; however, the same principles should hold across other frameworks.
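To illustrate the Scope point above, a minimal Ninject sketch (the repository types are hypothetical; InRequestScope() comes from the Ninject.Web.Common package):

using Ninject;
using Ninject.Web.Common;   // for InRequestScope()

var kernel = new StandardKernel();

// The same binding with three different lifetimes - each a one-line change:
kernel.Bind<IUserRepository>().To<SqlUserRepository>().InSingletonScope();
// kernel.Bind<IUserRepository>().To<SqlUserRepository>().InRequestScope();
// kernel.Bind<IUserRepository>().To<SqlUserRepository>().InThreadScope();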
No, the main point is that you then have a mapping somewhere that specifies the concrete type of IUserRepository. And the reason you might like that is that you can then create a unit test that supplies a mocked version of IUserRepository and execute your tests without changing anything in your controller.
Testability, mostly, but I'd suggest looking at Ninject; it plays well with MVC. To get the full benefits of IoC you should really combine it with Moq or a similar mocking framework.
Besides testability, I think you can also add extensibility as one of the advantages. One of the best practices of software development is "work with abstractions, not with implementations", and while you can do this in several ways, Unity provides a very extensible and easy way to achieve it. By using it, you will be creating abstractions that define the contracts your components must comply with. Say you want to completely change the repository your application currently uses. With DI you just "swap" the component, without needing to change a single line of code in your application. If you are using hard references, you might need to change and recompile your application because of a component that is external to it (in a different layer).
So, bottom line, IMHO, using DI helps you have pluggable components in your application and a SOLID application design.

Dependency injection container? What does it do?

I have been reading up on DI and it seems like a simple enough concept. What I don't get is the container. Let's say for a moment that I want to create my own container. Verbs like "detect" are used, and I don't get how the container "detects" that a new dependent object was created and knows to inject its dependencies. To me it seems like the container is a glorified factory.
Can anyone explain how a container is actually implemented, or maybe point me to a resource?
Thank you!
This is taken from Windsor documentation
Inversion of Control
Inversion of Control is a principle used by frameworks as a way to allow developers to extend the framework or create applications using it. The basic idea is that the framework is aware of the programmer's objects and makes invocations on them.

This is the opposite of using an API, where the developer's code makes the invocations to the API code. Hence, frameworks invert the control: it is not the developer code that is in charge, instead the framework makes the calls based on some stimulus.

You have probably been in situations where you have developed under the light of this principle, even though you were not aware of it.

Inversion of Control Container

An Inversion of Control Container uses the principle stated above to (in a nutshell) manage classes. That is, their creation, destruction, lifetime, configuration, and dependencies. This way classes do not need to obtain and configure the classes they depend on. This dramatically reduces coupling in a system and, as a consequence, simplifies reuse and testability.

There is some confusion created by people who think that 'Inversion of Control' is a synonym for 'Inversion of Control Container'. As stated, Inversion of Control is a broader principle.

Often people think that it is all about "injection", and broadcast that this is the primary purpose of IoC containers. In fact, "injection" is a consequence, a means to decouple, not the primary purpose.
You might want to read the book Dependency Injection in .NET... I have read it, and I strongly recommend it. It first gives a nice and insightful explanation of DI, then shows code and patterns for real-world applications of DI.
From this book, and in too few words...
"DI container is the technology used to support the DI technique" Page 55

Is testability alone justification for dependency injection?

The advantages of DI, as far as I am aware, are:
Reduced Dependencies
More Reusable Code
More Testable Code
More Readable Code
Say I have a repository, OrderRepository, which acts as a repository for an Order object generated through a LINQ to SQL dbml. I can't make my orders repository generic, as it performs mapping between the LINQ Order entity and my own Order POCO domain class.
Since the OrderRepository by necessity is dependent on a specific LINQ to SQL DataContext, parameter passing of the DataContext can't really be said to make the code reusable or to reduce dependencies in any meaningful way.
It also makes the code harder to read, as to instantiate the repository I now need to write
new OrdersRepository(new MyLinqDataContext())
which additionally is contrary to the main purpose of the repository, that being to abstract/hide the existence of the DataContext from consuming code.
So in general I think this would be a pretty horrible design, but it would give the benefit of facilitating unit testing. Is this enough justification? Or is there a third way? I'd be very interested in hearing opinions.
Dependency Injection's primary advantage is testing. And you've hit on something that seemed odd to me when I first started adopting Test-Driven Development and DI. DI does break encapsulation. Unit tests should test implementation-related decisions; as such, you end up exposing details that you wouldn't in a purely encapsulated scenario. Your example is a good one: if you weren't doing test-driven development, you would probably want to encapsulate the data context.
But where you say "Since the OrderRepository by necessity is dependent on a specific LINQ to SQL DataContext", I would disagree - we have the same setup and are only dependent on an interface. You have to break that dependency.
Taking your example a step further however, how will you test your repository (or clients of it) without exercising the database? This is one of the core tenets of unit testing - you have to be able to test functionality without interacting with external systems. And nowhere does this matter more than with the database. Dependency Injection is the pattern that makes it possible to break dependencies on sub-systems and layers. Without it, unit tests end up requiring extensive fixture setup, become hard to write, fragile and too damn slow. As a result - you just won't write them.
Taking your example a step further, you might have
In Unit Tests:
// From your example...
new OrdersRepository(new InMemoryDataContext());
// or...
IOrdersRepository repo = new InMemoryDataContext().OrdersRepository;
and In Production (using an IOC container):
// usually...
Container.Create<IDataContext>().OrdersRepository
// but can be...
Container.Create<IOrdersRepository>();
(If you haven't used an IOC container, they're the glue that makes DI work. Think of one as "make" (or Ant) for object graphs... the container builds the dependency graph for you and does all of the heavy lifting for construction.) By using an IOC container, you get back the dependency hiding that you mention in your OP. Dependencies are configured and handled by the container as a separate concern, and calling code can just ask for an instance of the interface.
There's a really excellent book that explores these issues in detail. Check out xUnit Test Patterns: Refactoring Test Code, by Meszaros. It's one of those books that takes your software development capabilities to the next level.
Dependency Injection is just a means to an end. It's a way to enable loose coupling.
A small comment first: dependency injection = IoC + dependency inversion. What matters most for testing, and what you actually describe, is dependency inversion.
Generally speaking, I think that testing justifies dependency inversion. But it doesn't justify dependency injection; I wouldn't introduce a DI container just for testing.
However, dependency inversion is a principle that can be bent a bit if necessary (like all principles). You can, in particular, use factories in some places to control the creation of objects.
If you have a DI container, that's what happens automatically: the DI container acts as a factory and wires the objects together.
The beauty of Dependency Injection is the ability to isolate your components.
One side-effect of isolation is easier unit-testing. Another is the ability to swap configurations for different environments. Yet another is the self-describing nature of each class.
However you take advantage of the isolation provided by DI, you have far more options than with tighter-coupled object models.
The power of dependency injection comes when you use an Inversion of Control container such as StructureMap. When using one, you won't see a "new" anywhere - the container takes control of your object construction. That way, nothing needs to know how its dependencies are built.
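A minimal StructureMap sketch of that idea (the type names are hypothetical):

using StructureMap;

var container = new Container(x =>
{
    x.For<IUserRepository>().Use<SqlUserRepository>();
    x.For<IEmailSender>().Use<SmtpEmailSender>();
});

// No "new" in sight: the container walks the constructor graph
// and builds everything HomeController needs.
var controller = container.GetInstance<HomeController>();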
I find that the relationship between needing a Dependency Injection container and testability is the other way around: I find DI extremely useful because I write unit-testable code, and I write unit-testable code because it's inherently better code - smaller, more loosely coupled classes.
An IoC container is another tool in the toolbox that helps you manage the complexity of an app - and it works quite well. I found it allows you to code to an interface better, by taking instantiation out of the picture.
The design of tests here always depends on the SUT (System Under Test). Ask yourself a question: what do I want to test?
If your repository is just an accessor to the database, then it needs to be tested as an accessor - with database involvement. (Actually, such tests are integration tests, not unit tests.)
If your repository performs some mapping or business logic as well as acting as an accessor to the database, then you need to decompose it in order to make your system comply with SRP (Single Responsibility Principle). After decomposition you will have two entities:
OrdersRepository
OrderDataAccessor
Test them separately from each other, breaking dependencies with DI.
As for constructor ugliness... use a DI framework to construct your objects. For example, using Unity, your construction code:
var repository = new OrdersRepository(new MyLinqDataContext());
will look like:
var repository = container.Resolve<OrdersRepository>();
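For completeness, a hedged sketch of the registration that would make that Resolve call work (assuming MyLinqDataContext implements an IDataContext interface that OrdersRepository takes in its constructor):

using Microsoft.Practices.Unity;

var container = new UnityContainer();

// Tell Unity which concrete DataContext to supply...
container.RegisterType<IDataContext, MyLinqDataContext>();

// ...and it can now build the repository and anything above it.
var repository = container.Resolve<OrdersRepository>();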
The jury is still out for me about the use of DI in the context of your question. You've asked if testing alone is justification for implementing DI, and I'm going to sound a little like a fence-sitter in answering this, even though my gut-response is to answer no.
If I answer yes, I am thinking about testing systems when you have nothing you can easily test directly. In the physical world, it's not unusual to include ports, access tunnels, lugs, etc, in order to provide a simple and direct means of testing the status of systems, machines, and so on. This seems reasonable in most cases. For example, an oil pipeline provides inspection hatches to allow equipment to be injected into the system for the purposes of testing and repair. These are purpose built, and provide no other function. The real question though is if this paradigm is suited to software development. Part of me would like to say yes, but the answer it seems would come at a cost, and leaves us in that lovely grey area of balancing benefits vs costs.
The "no" argument really comes down to the reasons and purposes for designing software systems. DI is a beautiful pattern for promoting the loose coupling of code, something we are taught in our OOP classes is a very important and powerful design concept for improving the maintainability of code. The problem is that like all tools, it can be misused. I'm going to disagree with Rob's answer above in part, because DI's advantages are NOT primarily testing, but in promoting loosely coupled architecture. And I'd argue that resorting to designing systems based solely on the ability to test them suggests in such cases that either the architecture is is flawed, or the test cases are inappropriately configured, and possibly even both.
A well-factored system architecture is in most cases inherently simple to test, and the introduction of mocking frameworks over the last decade makes the testing much easier still. Experience has taught me that any system that I found hard to test has had some aspect of it too tightly coupled in some way. Sometimes (more rarely) this has proven to be necessary, but in most cases it was not, and usually when a seemingly simple system seemed too hard to test, it was because the testing paradigm was flawed. I've seen DI used as a means to circumvent system design in order to allow a system to be tested, and the risks have certainly outweighed the intended rewards, with system architecture effectively corrupted. By that I mean back-doors into code resulting in security problems, code bloated with test-specific behaviour that is never used at runtime, and spaghettification of source code such that you needed a couple of Sherpa's and a Ouija Board just to figure out which way was up! All of this in shipped production code. The resulting costs in terms of maintenance, learning curve etc can be astronomical, and to small companies, such losses can prove to be devastating in the longer term.
IMHO, I don't believe that DI should ever be used as a means to simply improve testability of code. If DI is your only option for testing, then the design usually needs to be refactored. On the other hand, implementing DI by design where it can be used as a part of the run-time code can provide clear advantages, but should not be misused as it can also result in classes being misused, and it should not be over-used simply because it seems cool and easy, as it can in such cases over-complicate the design of your code.
:-)
It's possible to write generic data access objects in Java:
package persistence;
import java.io.Serializable;
import java.util.List;
public interface GenericDao<T, K extends Serializable>
{
T find(K id);
List<T> find();
List<T> find(T example);
List<T> find(String queryName, String [] paramNames, Object [] bindValues);
K save(T instance);
void update(T instance);
void delete(T instance);
}
I can't speak for LINQ, but .NET has generics, so it should be possible.
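A rough C# translation of the interface above might look like this (IEquatable<TKey> stands in for Java's Serializable bound, which has no direct C# equivalent, and two overloads are renamed to avoid ambiguity when T and TKey coincide):

using System;
using System.Collections.Generic;

public interface IGenericDao<T, TKey> where TKey : IEquatable<TKey>
{
    T Find(TKey id);
    IList<T> FindAll();
    IList<T> FindByExample(T example);
    IList<T> Find(string queryName, string[] paramNames, object[] bindValues);

    TKey Save(T instance);
    void Update(T instance);
    void Delete(T instance);
}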

What is the use of spring.net?

We are developing an application using Silverlight and WCF services. Is using Spring.NET beneficial for us?
>> "Is using Spring.NET beneficial for us?"
I think the spirit of your question is really geared more towards questioning the benefit of using an IoC/DI framework versus manually managing dependencies as needed. My response will focus more on the why and why not of IoC/DI and not so much on which specific framework to use.
As Martin Fowler mentioned at a recent conference, DI allows you to separate configuration from usage. For me, thinking about DI in the light of configuration and usage as separate concerns is a great way to start asking the right questions. Is there a need for your application to have multiple configurations for your dependencies? Does your app need the ability to modify behavior by configuration? Keep in mind, this means that dependencies are resolved at runtime and typically require an XML configuration file which is nice because changes can be made without requiring a recompile of the assembly. Personally, I'm not a fan of XML-based configuration of dependencies as they end up being consumed as "magic strings". So there's the danger of introducing runtime errors if you end up misspelling a class name, etc. But if you need the ability to configure on-the-fly, this is probably the best solution today.
On the other hand, there are DI frameworks like Ninject and StructureMap that allow fluent in-code dependency definitions. You lose the ability to change definitions on-the-fly, but you get the added benefit of compile time validations, which I prefer. If all you want from a DI framework is to resolve dependencies then you could eliminate XML-based frameworks from the equation.
From a Silverlight perspective, DI can be used in various ways. The most obvious is to define the relationship of Views to ViewModels. Going deeper, however, you can define validation, and RIA context dependencies, etc. Having all of the dependencies defined in a configuration class keeps the code free from needing to know how to get/create instances and instead focus on usage. Don't forget that the container can manage the lifetime of each object instance based on your config. So if you need to share an instance of a type (e.g. Singleton, ManagedThread, etc.), this is supported by declaring the lifetime scope of each type registered with the container.
I just realized at this point I'm ranting and I apologize. Hope this helps!
Personally, I'd recommend using either Castle or Unity, as I've had great success with both and found them both, while different, to be excellent IoC frameworks.
Besides the IoC component, they also provide other nifty tools (AOP in Castle, interface interception in Unity, for example) which you will no doubt find a use for in the future; and having an IoC framework in place from the start is ALWAYS a hell of a lot easier than trying to retrofit it.
It's incredibly easy to set up and configure, although personally I'm not a huge fan of the XML way of doing things, as some of those config files can turn into a total nightmare. A lot of people will tell you it's only worth doing if you intend to swap components in and out, but why not just do that anyway in case you decide you need to later? It's better to have it and not use it than to not have it and need it. If you're worried about the performance hit, there are plenty of blog posts around the web comparing the various IoC frameworks for speed, and unless you're creating brain-surgery robots or the US missile defence platform, it won't be an issue.
A DI framework might be of use if you want to change big chunks of your application without having to rewrite your constructors. For example, you might want to use a Comet streaming service that you expose through an interface, and later decide that you'd rather use a dedicated messaging system such as MQ or Rendezvous. You would then write an adapter to MQ that respects the common facade and just change the Spring config to use the MQ implementation rather than the Comet one.
But for the love of Tony the Pony, don't use Spring.NET to create your MVVM/MVP/MVC bindings for each and every view, or you'll enter a world of pain.
DI is a great tool when used with parsimony; please don't end up with 243 Spring configuration files, for your devs' sanity.
Using an IoC container such as Spring.NET is beneficial, as it will enable you to unit test parts of your UI by swapping in mocked or special test implementations of the application's interfaces. In the long run, this should make your application more maintainable for future developers.
I think if you do more in code, rather than using markup to do bindings etc., and have a BAL/DAL, DI can help there, because it can inject the correct business component reference (as one example). DI has many other practical advantages, but then you have to do more in code and less in markup.
